App-solutely Modded: Surveying Modded App Market Operators and Original App Developers

The market leading smartphone operating systems, Android and iOS, allow users to install apps through official pre-installed markets. Android also supports app installation from third-party sources, known as sideloading. Sideloading fosters competition and enables open source app markets. However, it also enables the proliferation of markets distributing pirated and modded apps: apps whose features and functionality have been altered by a third party. Modded apps typically claim to offer users premium or subscription features for free, no ads, free in-app purchases, additional in-game resources, etc.

We previously analysed hundreds of thousands of modded apps in the first large-scale study of Android modded app markets. We compiled a dataset of over 146,000 Android apps from 13 of the most popular modded app markets. Despite the common belief that sideloading in iOS requires a jailbroken iPhone, we have demonstrated this is not the case and compiled a dataset of over 40,000 apps from the 9 most popular iOS modded app markets for an ongoing study of the iOS modded app ecosystem. The datasets are available to academic researchers through the Cambridge Cybercrime Centre’s data-sharing agreements.

Original app developers lose significant potential revenue from modded apps due to the free provision of paid apps; the free availability of premium features that require payment in the official app; and changes to advertising identifiers, which took place in 21% of the Android apps with advertising IDs. While users benefit from increased competition and free pirated and modded apps, these apps pose great risks to their privacy and security. Modded apps are significantly riskier than official versions: modded Android and iOS apps are 10 and 33 times more likely to be malicious than their official versions, respectively.

Having studied the modded app ecosystem technically, we wanted to hear directly from the modded market operators about their incentives and motivations, and from the original app developers affected by modded apps about their experience and any effects they noticed as a result of modded apps. In our latest paper, App-solutely Modded: Surveying Modded App Market Operators and Original App Developers, we survey modded app market operators and 717 app developers affected by modded apps. We used our updated Android modded apps dataset to contact 27,000 affected app developers with a personalised digest of our analysis results. 

We find modded market operators have economic incentives to break copyright law and make it difficult to file complaints. They perform little to no security testing of the apps they host and benefit from app developers’ intellectual property. Meanwhile, original developers suffer losses from missed purchases, reduced advertising revenue, additional support requests, and reputational damage. Unfortunately, developers find legal protections are ineffective at preventing modded versions of their apps appearing on third-party stores. Developers are unaware of, or find it hard to use the security features and technical tools which can make the production and use of modded apps much harder.

We also study DMCA compliance of the top 23 modded app markets and confirm our survey findings: DMCA copyright claims are unusable at scale. Our paper concludes with a review of the technical and legal methods hardware and OS vendors, developers and regulators can use to tackle modded apps with the aim of better protecting developers’ intellectual property and revenue as well as user security and privacy. A few weeks ago, Google went a step further than our recommendations and announced the end of sideloading unverified developers’ apps on certified Android devices starting next year.

Taking Down Booters: The Cat-and-Mouse Game

In December 2022, we first blogged about a law enforcement takedown of DDoS-for-hire services (often known as “booters”), sharing details about their changing landscape shortly after the initial seizures. Now that we have more data covering a longer period post-takedown, we can form a clearer picture of the impact.

Booters have been around for years, offering anyone with a few dollars the ability to knock websites offline if they lack DDoS protection. They are often marketed as harmless “stress-testing” tools, but in practice they are mostly used for malicious purposes. They’re easy to access, cheap to use, and difficult to stop.

Law enforcement had made several attempts to take them down in the past—for example, in 2018—but the effects were short-lived. This time, multiple law enforcement agencies launched what was likely their largest coordinated campaign to date. There were two waves of takedowns, in December 2022 and May 2023, resulting in about 60 domains being seized in total. In addition to seizing websites, authorities also set up deceptive sites and ran influence campaigns on forums and chat channels to deter potential customers.

We measured the impact of this campaign by incorporating a diverse mix of data. Continue reading Taking Down Booters: The Cat-and-Mouse Game

Cambridge Cybercrime Conference 2025 – Liveblog

The Cambridge Cybercrime Centre’s eighth one-day conference on cybercrime was held on Monday, 23rd June 2025, which marked 10 years of the Centre.

Similar to previous “liveblog” coverage of conferences and workshops on Light Blue Touchpaper, here is a “liveblog”-style overview of the talks at this year’s conference.

Sunoo Park — Legal Risks of Security Research

Sunoo discussed researchers facing restrictive TOS clauses and the risk of adversarial scrutiny, noting that security research can be difficult to distinguish from malicious hacking, so researchers need to understand the risks they run. Sunoo highlighted particular US laws that create risk for researchers, sharing a guide they wrote on these risks. The project grew from colleagues, as well as clients, receiving legal threats; it aims to enable informed decisions on how to seek advice, and also to nudge public discussion on law reform.

The CFAA was passed a long time ago, around the time of the Wargames film, and computer crime has changed a lot since then. The law defines “computer” to cover pretty much any computer, and prohibits access that is unauthorized or exceeds authorized access. One early case was United States v. McDanel: McDanel found a bug in a company’s software and reported it to that company’s customers. He was prosecuted for informing customers of the security flaw, with the harm framed as the cost of fixing it, though the government later requested the case be overturned. More recently, there was a case of a police database being accessed in exchange for a bribe, which was also brought under the CFAA.

Another law is the DMCA, which states that “no person shall circumvent a technological measure that effectively controls access to a work”; this may apply to captchas, anti-bot measures, etc.

Sunoo is starting a new study looking at researchers’ lived experiences of legal risk under US/UK law. It can be hard for researchers to talk openly about these experiences, which leaves little evidence to counter the laws with, and much of the available information is anecdotal. Sunoo would like to hear from US/UK researchers about their experiences of legal risk.

Alice Hutchings — Ten years of the Cambridge Cybercrime Centre

The Centre was established in 2015, to collect and share cybercrime data internationally. They collect lots of data at scale: forums, chat channels, extremist platforms, DDoS attacks, modded apps, defacements, spam, and more. They share datasets with academics, not for commercial purposes, through agreements to set out ethical and legal constraints. The aim was to help researchers with collecting data at scale, and overcome challenges with working on large datasets. They don’t just collect data, but they do their own research too, around crime types, offenders, places, and responses.

Session 1: Trust, Identity, and Communication in Cybercriminal Ecosystems

Roy Ricaldi — From trust to trade: Uncovering the trust-building mechanisms supporting cybercrime markets on Telegram

Roy is researching trust and cybercrime, and how trust is built on Telegram. Cybercrime markets rely on trust to function, and there is existing literature on this topic for forums. Forums have structured systems, such as reputation and escrow, whereas Telegram is more ephemeral, but still used for trading. Roy asks: how is trust established in this volatile, high-risk environment? Economic theory states that without trust, markets can fail.

Roy starts by exploring the market segments found, looking at trust signals and how frequently users are exposed to these trust systems. Roy notes chat channels can have significant history, and while trust signals exist, users may not easily find older ones. They built a snowballing and classification pipeline to collect over 1 million messages from 167 Telegram communities. Later, they developed a framework for measuring and simulating trust signals. Findings showed market segments were highly thematic within communities, as were trust signals. They used DeepSeek-V3 for classification, which detected trust signals and market segments with the highest accuracy. They found an uneven distribution of trust signals across market segments; for example, piracy content is free, so trust signals were rare there.

They find messages asking for the use of escrow, or asking others to “vouch” for sellers. Some of these communities have moderators who set rules around the types of messages allowed. After looking at the distribution, they ran a simulation to see how many signals users were exposed to, setting up profiles of market segments, communities visited, and messages read. They found 70% of users see five or fewer trust signals in their simulation, and all users see at least one. Over time, these do evolve, with digital infrastructure forming a larger peak. They note the importance of understanding how trust works on Telegram, to help find the markets that matter and can cause harm.
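The exposure simulation can be illustrated with a minimal sketch. This is not the authors’ actual pipeline: the segment names, signal rates, and message counts below are invented for illustration only.

```python
import random

# Hypothetical per-message trust-signal rates by market segment (invented).
SIGNAL_RATE = {"accounts": 0.08, "carding": 0.12, "piracy": 0.01}

def simulate_user(segments, messages_per_segment=200, seed=None):
    """Count how many trust signals (escrow requests, 'vouch' messages, ...)
    a simulated user is exposed to while reading messages in each segment."""
    rng = random.Random(seed)
    seen = 0
    for seg in segments:
        rate = SIGNAL_RATE[seg]
        # Each message read independently carries a trust signal with prob `rate`.
        seen += sum(rng.random() < rate for _ in range(messages_per_segment))
    return seen

# Simulate 1,000 users who each browse an accounts and a piracy community.
exposures = [simulate_user(["accounts", "piracy"], seed=i) for i in range(1000)]
low_exposure = sum(e <= 5 for e in exposures) / len(exposures)
```

With rates this shape, the piracy segment contributes almost no signals, mirroring the finding that free content attracts few trust mechanisms.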

John McAlaney — Power, identity and group dynamics in hacking forums

John discussed work in progress around power structures and group dynamics in the CrimeBB dataset. He attended Defcon as a social psychologist, observing the interaction dynamics and how people see themselves within such a large conference.

Previous work on identity asked whether hacking forum members considered themselves to be “hackers”, and resulted in discussions around the term and labelling. Other previous work looked at themes of what was spoken about in forums, such as legality, honesty, skill acquisition, knowledge, and risk. Through interviews, they found people had contradictory ideas around trust. They note existing hierarchies of power within forums, and evidence of social psychological phenomena.

Within the existing research literature, John found a gap where theories had not necessarily been explored in the online forum setting. They ask: do groups form on hacking forums in the same way as on other online forums? How does the structure of these groups differ? Are the group dynamics different?

He was initially working with a deductive approach for thematic analysis. “Themes do not emerge from thematic analysis”; rather, the analyst constructs them from what is discussed. He is not looking to generalise from the thematic analysis, but is looking into BERT next to see if they are missing any themes in the dataset.

He suggests the main impact will aim to contribute back to sociological literature, and also try to improve threat detection.

Haitao Shi — Evaluating the impact of anonymity on emotional expression in drug-related discussions: a comparative study of the dark web and mainstream social media

Haitao looked at self-disclosure, emotional disclosure, and environmental influence on cybercrime forums. They ask how different models of anonymity across chat channels and forums vary, and which different communication styles emerge. They identified drug-related channels and discussions for their analysis, and took steps to clean and check dataset quality. The project used BERTopic to embed messages for clustering, then plotted these to visually identify similar topics. To explore the topics further, Haitao used an emotion classifier to detect intent. They found high levels of disgust, anger, and anticipation in their dataset.

Session 2: Technical Threats and Exploitation Tactics

Taro Tsuchiya — Blockchain address poisoning

Taro introduces a scenario of paying rent in cryptocurrency, where the victim appears to make an error selecting the recipient address; the address turns out to have been poisoned. Taro aims to identify address poisoning, see how prevalent it is, and measure the payoff. They identify attack attempts with an algorithm that matches transfers with similar addresses in a given time range.
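The matching idea can be sketched roughly as follows. This is a simplified illustration, not the paper’s algorithm: the prefix/suffix lengths, time window, and transfer format are all invented assumptions.

```python
def looks_alike(addr_a: str, addr_b: str, prefix: int = 4, suffix: int = 4) -> bool:
    """Heuristic lookalike test: wallets typically display only the first and
    last few hex characters of an address, so an address matching on both ends
    can pass a casual visual check while differing in the middle."""
    a, b = addr_a.lower(), addr_b.lower()
    return a != b and a[:2 + prefix] == b[:2 + prefix] and a[-suffix:] == b[-suffix:]

def flag_poisoning_candidates(transfers, window: int = 3600):
    """Flag pairs where a zero-value transfer from a lookalike address follows
    a genuine transfer within `window` seconds, polluting the victim's history.
    `transfers` is a list of (timestamp, counterparty_address, value) tuples
    sorted by timestamp."""
    flagged = []
    for i, (t1, addr1, v1) in enumerate(transfers):
        for t2, addr2, v2 in transfers[i + 1:]:
            if t2 - t1 > window:
                break  # sorted input: everything later is out of range too
            if v2 == 0 and looks_alike(addr1, addr2):
                flagged.append((addr1, addr2))
    return flagged
```

The point of the heuristic is that the attacker only needs to match the characters a wallet UI actually shows, which is also why generating such addresses is cheap enough to do at scale with GPUs.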

They detect 270M attack transfers targeting 17M victims, estimating an $84M USD loss. They found losses were much higher on Ethereum, and this lookalike attack is easily generalisable and scalable.

They bundled the attacks into groups, considering two to be from the same group if they are launched in the same transaction, use the same address to pay the transaction fees, or use the same lookalike address. Clustering found “copying bots”, who copy other transactions for front-running. The attack groups identified are large but heterogeneous, and the attack itself is profitable for large groups. Furthermore, larger groups tend to win over smaller groups. Finally, they model lookalike address generation, finding that one large group is using GPUs to generate these addresses.

They give suggestions for mitigating these attacks: adding latency to address generation, disallowing zero-value transfers, and increasing wallet address lengths. They also want to alert users to the risk of this attack.

Marre Slikker — The human attack surface: understanding hacker techniques in exploiting human elements

Marre is looking at human factors in security, as these are commonly the weakest link. Marre asks: what do hackers on underground forums discuss regarding the exploitation of human factors in cybercrime? They look at CrimeBB data to analyse the topics discussed, identify the lexicon used, and give a literature review of how these factors are conceptualised.

They create a bridge between academic human factor language (“demographics”) to hacker language (“target dumb boomers”), and use topic modelling to identify distribution of words used in forum messages.

What were their results? A literature review found a lot of inconsistencies in human factors research terminology. Following this, they asked cybersecurity experts about human factors, and created a list of 328 keywords to help filter the dataset. Topic modelling was then used; however, the results were quite superficial, with lots of noise and general chatter.
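A keyword pre-filter of this kind might look like the following sketch; the keyword list here is a tiny invented stand-in for the 328 expert-derived terms.

```python
import re

# Illustrative keywords only; the study's actual list came from expert elicitation.
KEYWORDS = {"phishing", "pretexting", "social engineering", "impersonation"}

def keyword_filter(messages, keywords=KEYWORDS):
    """Keep only messages containing at least one keyword (whole-word,
    case-insensitive match), as a pre-filter before topic modelling."""
    patterns = [re.compile(r"\b" + re.escape(k) + r"\b", re.IGNORECASE)
                for k in keywords]
    return [m for m in messages if any(p.search(m) for p in patterns)]
```

A filter like this keeps the downstream topic model tractable, but it also illustrates the noise problem: any general chatter that happens to mention a keyword still gets through.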

Kieron Ivy Turk — Technical Tactics Targeting Tech-Abuse

Ivy discussed a project on personal item tracking devices, which have been misused for stalking, domestic abuse, and theft. Companies have developed anti-stalking features to try to mitigate these issues. They ran a study with the Assassins Guild, providing students with trackers to test the efficacy of these features. The study found nobody used the anti-stalking features, despite everyone in the study knowing there was a possibility they were being stalked. At the time of the study, the scanning apps tended to detect only a subset of tracker brands. Apple and Google have since created an RFC to try to standardise trackers and anti-stalking measures.

Ivy has also been working on IoT security to understand the associated risks. They present a HARMS model to help analyse IoT device security failings. Ivy ran a study to identify harms with IoT devices, asking participants to misuse them. They ask: how do attackers discover abusive features? They found participants used and explored the UI to find the features available to them. They suggest the idea of a “UI-bounded” adversary is limiting; rather, attackers are “functionality-enabled”.

Ivy asks how can we create technical improvements in future with IoT?

Session 3: Disruption and Resilience in Illicit Online Activities

Anh V. Vu — Assessing the aftermath: the effects of a global takedown against DDoS-for-hire services

Anh has been following DDoS takedowns by law enforcement. DDoS-for-hire services provide a platform for taking control of botnets to flood servers with fake traffic. Little technical skill is needed, and the services are cheap. These services publicly advertise statistics of the daily attacks they contribute to.

Law enforcement continues to take down DDoS infrastructure, focusing on domain takedowns. Visitor statistics following the takedowns recorded 20M visitors, and 34k messages were collected from DDoS-support Telegram channels. They also have DDoS UDP amplification data, and collected self-reported DDoS attack data.

The domain takedowns showed that domains returned quickly: 52% returned after the first takedown, and after the second takedown all of them returned. Domain takedown now appears to have limited effect. Visitor statistics showed large booters operate a franchise business, offering API access to resellers.

Following the first takedown, activity and chat channel messages declined, but this had less impact in the second wave. Operators gave away free extensions to plans, and a few seemed to leave the market.

Their main takeaway is that the overall intervention impact is short-lived, and suppressing the supply side alone is not enough, as demand persists in the long run. He asks what can be done better in future interventions.

Dalya Manatova — Modeling organizational resilience: a network-based simulation for analyzing recovery and disruption of ransomware operations

Dalya studies the organisational dynamics and resilience of cybercrime, tracking the evolution and rebranding of ransomware operators. To carry out ransomware operations, groups need infrastructure: selecting targets, executing attacks, negotiating ransoms, processing payments, supporting victims, and creating leak websites. They break this down further into a complex model showing the steps of ransomware attacks, and use it to model task durations, estimating how long it takes to complete a ransomware attack while learning. They then introduce infrastructure disruption and observe how the process changes. They also model the disruption of members: what happens if tasks are reassigned to others, or a new person is hired?

Marco Wähner — The prevalence and use of conspiracy theories in anonymity networks

Marco first asks: what is a conspiracy theory? These often intertwine with right-wing extremism, antisemitism, and misinformation. There are a lot of challenges in researching conspiracy theories: the language is often indirect and coded. However, this is not a new phenomenon.

What is the influence of environmental and structural factors on conspiracy theories in anonymised networks? Marco notes conspiracy theories can strengthen social ties and foster a sense of belonging, and may also serve ideological or social incentives.

Marco asks how we can identify these theories circulating in anonymised networks, and whether they are used to promote illicit activities or drive sales. This could then be used to formulate intervention strategies. They took a data-driven approach, looking at CrimeBB and ExtremeBB data to find conspiracies using dictionary keyword searches and topic modelling. Preliminary research found the prevalence of conspiracies was very low; it is a bit higher on ExtremeBB, but still rare.

They provide explanations for the low level of distribution. Keywords are indirect and can be out of context when searching; also, conspiratorial communication is not always needed to sell products. They aim to strengthen the study design by coding a subsample to check for false positives and using classical ML models. They find a dictionary approach may not be a good starting point, and conspiracies are not always used to sell products.

Human HARMS: Threat modelling social harms against technical systems

by Kieron Ivy Turk, Anna Talas, and Alice Hutchings

When talking about the importance of cybersecurity, we often imagine hackers breaking into high-security systems to steal data or money, or to launch large-scale attacks. However, technology can also be used for harm in everyday situations. Traditional cybersecurity models tend to focus on protecting systems from highly skilled external threats. While these models are effective in cybersecurity, they do not adequately address interpersonal threats that often require no technical skill, such as those found in cases of domestic abuse.

The HARMS model (Harassment, Access and infiltration, Restrictions, Manipulation and Tampering, and Surveillance) is a new threat modelling framework. It is designed to identify non-technical and human factors harms that are often missed by popular frameworks such as STRIDE. We focused on how everyday technology, such as IoT devices, can be exploited to distress, control or intimidate others. 

The five elements of this model are Harassment, Access and infiltration, Restrictions, Manipulation and tampering, and Surveillance. Definitions and examples of these terms are provided in Table 1.

The threat model can be used to consider how a device or application can be used maliciously, to identify ways it can be re-designed to make it more difficult to commit these harms. Imagine, for example, a smart speaker in a shared home. This could be used maliciously by an abusive individual to send distressing messages to be read aloud, or to set alarms to go off in the middle of the night. Equally, if the smart speaker is connected to calendars, scheduled events could be changed or removed, so users miss meetings and appointments. Furthermore, connected devices can be controlled remotely or automatically through routines, causing changes that the user does not understand and making them doubt their memory or even their sanity. An abuser could also monitor conversations through built-in microphones, or keep track of the commands others have used on the device through its logs.
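One way to put the model to work is as a structured review checklist applied to each device. The sketch below is purely illustrative: the review prompts are ours, not taken from the paper.

```python
# Hypothetical checklist encoding of the five HARMS categories; the prompts
# are illustrative examples, not the paper's definitions (see Table 1).
HARMS = {
    "Harassment": "Can the device deliver unwanted messages, sounds, or alerts?",
    "Access and infiltration": "Can a non-owner retain or regain access (shared accounts, stale sessions)?",
    "Restrictions": "Can someone lock others out of features, data, or the device itself?",
    "Manipulation and tampering": "Can settings, schedules, or automations be changed covertly?",
    "Surveillance": "Can the device monitor people (microphones, logs, location, usage history)?",
}

def threat_review(device: str) -> list:
    """Generate one review question per HARMS category for a given device."""
    return [f"{device} / {category}: {prompt}" for category, prompt in HARMS.items()]
```

Running `threat_review("smart speaker")` yields one prompt per category, prompting the reviewer to walk through exactly the kinds of misuse described above.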

Importantly, any one type of harm is not constrained to one of these categories – in fact, many possible attacks will span multiple components of HARMS. For example, a common yet severe online harm is doxxing, wherein a malicious user obtains sensitive information about a user and shares it online. This encompasses many aspects of the HARMS model, as the information may be obtained through surveillance but released with the intention of harassing other users. Any threat analysis utilising HARMS must therefore consider possible overlaps between elements to identify a broader set of attacks.

The human HARMS model approaches threat modelling from a unique angle compared to widespread methodologies such as STRIDE. There exist various overlaps between methods, which can be used to obtain a greater perspective of possible attack types. The Surveillance component of HARMS concerns privacy, as does Information disclosure in STRIDE. However, surveillance covers malicious observation and monitoring of people, whilst information disclosure focuses on data storage and leaks. Other risks can only be identified through one model, such as Harassment (HARMS) and Repudiation (STRIDE). We recommend using multiple threat modelling methodologies to encourage improved analysis of security, privacy, and possible misuse of novel systems.

As smart home technology, connected devices, and online platforms continue to evolve, we must think beyond just technical security. Our HARMS model highlights how technology, even when working as intended, can be used to control and harm individuals. By incorporating human-centered threat modelling into software development, in addition to traditional threat modelling methods, we can build safer systems and help prevent them from being used for abuse.

Paper: Turk, K. I., Talas, A., & Hutchings, A. (2025). Threat Me Right: A Human HARMS Threat Model for Technical Systems. arXiv preprint arXiv:2502.07116.

A feminist argument against weakening encryption

Attacks on encryption continue. The UK government has just reportedly handed Apple a Technical Capability Notice – effectively demanding that Apple allow UK law enforcement access to their users’ encrypted cloud servers. This is the latest in a series of recent pushes by the UK Government and security services to establish backdoors in the end-to-end encrypted services which underpin a great deal of our lives. It is also happening at a time when many of us are really quite scared of the things that governments – particularly the new US administration – might do with backdoor access to Internet platforms. Undermining the security of these services would also hand further power to the companies who provide these platforms to access this data themselves.

This directly threatens the privacy of Apple’s users – and the safety of many of those who might now be targeted for retribution or enforcement. GCHQ has generally argued that there are useful technical work-arounds that can provide access to legitimate authorities to help with law enforcement. The UK government has particularly used the genuine issue of mass-scale online gender-based violence, particularly the exploitation of children, to make the case for mass-scale surveillance of the Internet in order to detect this violence and arrest those culpable.

The government argument here is that encryption and anonymity provide a safe haven for online abusers – they stop investigations, frustrate prosecutions, and form a major blocker to tackling misogynistic violence. I disagree. I’m going to leave the well-rehearsed technical arguments about whether it is feasible to weaken encryption for the government but not for hostile actors to one side for now (spoiler: it isn’t), and focus on the substantive policy area of gender based violence itself.

Continue reading A feminist argument against weakening encryption

It is time to standardize principles and practices for software memory safety

In an article in the February 2025 issue of Communications of the ACM, I join 20 coauthors from across academia and industry in writing about the remarkable opportunity for universal strong memory safety in low-level Trusted Computing Bases (TCBs) enabled by recent advances in type- and memory-safe systems programming languages (e.g., the Rust language), hardware memory protection (e.g., our work on CHERI), formal methods, and software compartmentalisation. These technologies are seeing increasing early deployment in critical software TCBs, but struggle to make headway at scale given real costs and potential disruption stemming from their adoption, combined with unclear market demand despite widespread recognition of the criticality of this issue. As a result, billions of lines of memory-unsafe C/C++ systems code continue to make up essential TCBs across the industry – including Windows, Linux, Android, iOS, Chromium, OpenJDK, FreeRTOS, vxWorks, and others. We argue that a set of economic factors such as high opportunity costs, negative security impact as an externality, and two-sided incomplete information regarding memory safety lead to limited and slow adoption despite the huge potential security benefit: it is widely believed that these techniques would have deterministically eliminated an estimated 70% of critical security vulnerabilities in these and other C/C++ TCBs over the last decade.

In our article, we describe how developing standards for memory-safe systems may be able to help enable remedies by making potential benefit more clear (and hence facilitating clear signalling of demand) as well as permitting interventions such as:

  • Improving actual industrial practice
  • Enabling acquisition requirements that incorporate memory-safety expectations
  • Enabling subsidies or tax incentives
  • Informing international discussions around software liability
  • Informing policy interventions for specific, critical classes of products/use cases
Continue reading It is time to standardize principles and practices for software memory safety

Join Our 3-Course Series on Cybersecurity Economics

On 2 October, TU Delft are starting a new online three course series on cybersecurity economics. I am co-teaching this course with Michel van Eeten (TU Delft), Daniel Woods (University of Edinburgh), Simon Parkin (TU Delft), Rolf van Wegberg (TU Delft), Tyler Moore (University of Tulsa) and Rainer Böhme (University of Innsbruck). The course also features content from Ross Anderson (University of Cambridge), recorded before his passing. Ross was passionate about teaching, and was deeply involved in the design of this MOOC.

The first course, on Foundation and Measurement, provides you with foundational micro-economic concepts to explain the security behavior of the various actors involved in securing the organization – internally, like IT and business units, and externally, like suppliers, customers and regulators. Next, it equips you with a causal framework to understand how to measure the effectiveness of security controls, as well as what measurements are currently available.

The second course, on Users and Attackers, presents a wealth of insights on the individuals involved in security: from user behavior to the strategies of attackers. Contrary to popular opinion, users are not the weakest link. If you want to know why users do not follow company security policies, you need to look at the costs imposed on them. On the side of the attackers, there are also clear incentives at work. The course covers the latest insights on attacker behavior.

The third course, on Solutions, covers ways to overcome the incentive misalignment and information problems at the level of organizations and at the level of markets. Starting with the standard framework of risk management, the course unpacks how to identify solutions in risk mitigation and risk transfer, and where risk acceptance might be more rational. Finally, we need to address market failures, since they end up undermining the security of firms and society at large.

Two invitations to Cambridge

Two invitations to Cambridge (UK):

2025-03-25: the Rossfest Symposium, in honour of Ross Anderson (1956-2024)
https://www.cl.cam.ac.uk/events/rossfest/

2025-03-26 and 27: the 29th Security Protocols Workshop
https://www.cl.cam.ac.uk/events/spw/2025/

Start writing, and sign up here for updates on either or both:
https://forms.gle/Em9Hy43aRqrdGmd17

The Rossfest Symposium and its posthumous Festschrift is a celebration and remembrance of our friend and colleague Ross Anderson, who passed away suddenly on 28 March 2024, aged 67.

Ross Anderson FRS FRSE FREng was Professor of Security Engineering at the University of Cambridge and lately also at the University of Edinburgh. He was a world-leading figure in security. He had a gift for pulling together the relevant key people and opening up a new subfield of security research by convening a workshop on the topic that would then go on to become an established series, from Fast Software Encryption to Information Hiding, Scrambling for Safety, Workshop on Economics and Information Security, Security and Human Behavior and so forth. He co-authored around 300 papers. His encyclopedic Security Engineering textbook (well over 1000 pages) is dense with both war stories and references to research papers. An inspiring and encouraging supervisor, Ross graduated around thirty PhD students. And as a contagiously enthusiastic public speaker he inspired thousands of researchers around the world.

The Rossfest Symposium is an opportunity for all of us who were touched by Ross to get together and celebrate his legacy.

The Festschrift volume

Scientific papers

We solicit scientific contributions to a posthumous Festschrift volume, in the form of short, punchy papers on any security-related topic. These submissions will undergo a lightweight review process by a Program Committee composed of former PhD students of Ross.

Accepted papers will be published in the Festschrift book and presented at the event. For a subset of the accepted papers, the authors will be invited to submit an expanded version to a special issue of the Journal of Cybersecurity honouring Ross’s scholarly contributions and legacy.

Submissions are limited to five pages in LNCS format (we did say short and punchy!) and will get an equally short presentation slot at the Rossfest. Let’s keep it snappy, as Ross himself would have liked. Five pages excluding bibliography and any appendices, that is, and maximum eight pages total.

Topic-wise, anything related to security, taking the word in its broadest sense, is fair game, from cryptography and systems to economics, psychology, policy and much more, spanning the wide spectrum of fields that Ross himself explored over the course of his career. But make it a scientific contribution rather than just an opinion piece.

Authors will grant us a licence to publish and distribute their articles in the Festschrift but will retain copyright and will be able to put their articles on their web pages or resubmit them wherever else they like. We won’t ask for article charges for publishing in the Festschrift. Bound copies of the Festschrift volume will be available to purchase at cost during the Rossfest Symposium, or later through print-on-demand. A DRM-free PDF will be available online at no charge.

Informal memories

We also solicit informal “cherished memories” contributions along the lines of those collected by Anh Vu at anderson.love. These too will be collected in the volume and a selection of them will be presented orally at the event.

The Rossfest Symposium

The Rossfest Symposium will last the whole day and will take place at the Computer Laboratory (a.k.a. the Department of Computer Science and Technology of the University of Cambridge), where Ross taught, researched and originally obtained his own PhD. Street address: 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK.

Attendance at the Rossfest Symposium is free and not conditional on the submission of a contribution, but registration will be required for us to manage numbers and catering.

In the evening there shall also be a formal celebration banquet at Trinity College. To attend, please purchase a ticket. Registration and payment links shall appear on this page in due course. Street address: Trinity Street, Cambridge CB2 1TQ, UK.

We have timed the Rossfest to be adjacent in time and space to the Security Protocols Workshop, an event that Ross regularly attended. The SPW will take place in Trinity College Cambridge on 26 and 27 March 2025. This will allow you to attend both events with a single trip to Cambridge. Note that attendance at SPW requires presenting a position paper: unlike the Rossfest, at SPW all attendees must also speak.

Accommodation in Cambridge

The chosen dates are out of term, meaning you might be able to book a room in one of the 31 colleges through www.universityrooms.com. Otherwise, consider www.airbnb.com, www.booking.com, www.expedia.com or your favourite online booking aggregator.

Sign up

To receive notifications (e.g. “the registration and payment links are now up”), sign up on this Google form: https://forms.gle/Em9Hy43aRqrdGmd17. Self-service unsubscribe at any time.

Dates

25 November 2024: Deadline for submission of Festschrift articles
23 December 2024: Invitations to authors to present orally
13 January 2025: Early bird (discounted) registration deadline for banquet
10 February 2025: Final registration deadline for banquet and symposium
25 March 2025: Rossfest Symposium (and optional banquet)
26-27 March 2025: Security Protocols Workshop (a separate event, but possibly of interest)

The Security Protocols Workshop

The Twenty-ninth International Workshop on Security Protocols will take place from Wednesday 26 March to Thursday 27 March 2025 in Cambridge, United Kingdom. It will be dedicated to the memory of Ross Anderson and preceded by the Rossfest Symposium, which will take place on Tuesday 25 March 2025, also in Cambridge, UK. Come to both!

As in previous years, attendance at the International Workshop on Security Protocols is by invitation only.  (How do I get invited? Submit a position paper.)

Theme

The theme of the 2025 workshop is: “Controversial Security – In honour of Ross Anderson”. In other words, “any security topic that Ross Anderson might have wanted to debate with you”, which leaves you with plenty of leeway.

This is a workshop for discussion of novel ideas, rather than a conference for finished work. We seek papers that are likely to stimulate an interesting discussion. New authors are encouraged to browse through past volumes of post-proceedings (search for Security Protocols Workshop in the Springer LNCS series) to get a flavour for the variety and diversity of topics that have been accepted in past years, as well as the lively discussion that has accompanied them.

Details

The long-running Security Protocols Workshop has hosted lively debates with many security luminaries (the late Robert Morris, chief scientist at the NSA and well known for his pioneering work on Unix passwords, used to be a regular) and continues to provide a formative event for young researchers. The post-proceedings, published in LNCS, contain not only the refereed papers but the curated transcripts of the ensuing discussions (see the website for pointers to past volumes).

Attendance is by invitation only. To be considered for invitation you must submit a position paper: it will not be possible to come along as just a member of the audience. Start writing now! “Writing the paper is how you develop the idea in the first place”, in the wise words of Simon Peyton Jones.

The Security Protocols Workshop is, and has always been, highly interactive. We actively encourage participants to interrupt and challenge the speaker. The presented position papers will be revised and enhanced before publication as a consequence of such debates. We believe the interactive debates during the presentations, and the spontaneous technical discussions during breaks, meals and the formal dinner, are part of the DNA of our workshop. We encourage you to present stimulating and disruptive ideas that are still at an initial stage, rather than “done and dusted” completed papers of the kind that a top-tier conference would expect. We are interested in eliciting interesting discussion rather than collecting archival material.

Submissions

Short indicative submissions are preferred. You will have the opportunity to extend and revise your paper both before the pre-proceedings are issued, and again after the workshop. At the workshop, you will be expected to spend a few minutes introducing the idea of your paper, in a way that facilitates a longer more general discussion. Pre-proceedings will be provided at the workshop. See the Submission page for more details.

Committee

• Fabio Massacci (Program Chair), University of Trento / Vrije Universiteit Amsterdam
• Frank Stajano (General Chair), University of Cambridge
• Vashek (Vaclav) Matyas, Masaryk University
• Jonathan Anderson, Memorial University
• Mark Lomas, Capgemini

Accommodation in Cambridge

The chosen dates are out of term, meaning you might be able to book a room in one of the 31 colleges through www.universityrooms.com. Otherwise, consider www.airbnb.com, www.booking.com, www.expedia.com or your favourite online booking aggregator.

Dates

25 November 2024: Submission of position papers
23 December 2024: Invitations to authors
13 January 2025: Early bird (discounted) registration deadline
3 February 2025: Revised papers due
10 February 2025: Final registration deadline
25 March 2025: Rossfest Symposium (a separate event, but possibly of interest)
26-27 March 2025: Security Protocols Workshop

For further details visit the web page at the top of this message. To be notified when the registration and paper submission pages open, sign up on this Google form: https://forms.gle/Em9Hy43aRqrdGmd17. Self-service unsubscribe at any time.

Security and Human Behavior 2024

The seventeenth Security and Human Behavior workshop was hosted by Bruce Schneier at Harvard University in Cambridge, Massachusetts on the 4th and 5th of June 2024 (Schneier blog).

This playlist contains audio recordings of most of the presentations, with timestamps to the start of each talk; click the descriptions to see them.

At lunch on the first day, several attendees remembered the recently departed Ross Anderson, who co-founded this workshop with Bruce Schneier and Alessandro Acquisti in 2008. That recording is in the playlist too.

Kami Vaniea kept up Ross’s tradition by liveblogging most of the event.

I’ll be hosting next year’s SHB at the University of Cambridge.