Two invitations to Cambridge

Two invitations to Cambridge (UK):

2025-03-25: the Rossfest Symposium, in honour of Ross Anderson (1956-2024)

2025-03-26 and 27: the 29th Security Protocols Workshop

Start writing, and sign up for updates on either or both.

The Rossfest Symposium and its posthumous Festschrift is a celebration and remembrance of our friend and colleague Ross Anderson, who passed away suddenly on 28 March 2024, aged 67.

Ross Anderson FRS FRSE FREng was Professor of Security Engineering at the University of Cambridge and lately also at the University of Edinburgh. He was a world-leading figure in security. He had a gift for pulling together the relevant key people and opening up a new subfield of security research by convening a workshop on the topic that would then go on to become an established series, from Fast Software Encryption to Information Hiding, Scrambling for Safety, Workshop on Economics and Information Security, Security and Human Behavior and so forth. He co-authored around 300 papers. His encyclopedic Security Engineering textbook (well over 1000 pages) is dense with both war stories and references to research papers. An inspiring and encouraging supervisor, Ross graduated around thirty PhD students. And as a contagiously enthusiastic public speaker he inspired thousands of researchers around the world.

The Rossfest Symposium is an opportunity for all of us who were touched by Ross to get together and celebrate his legacy.

The Festschrift volume

Scientific papers

We solicit scientific contributions to a posthumous Festschrift volume, in the form of short, punchy papers on any security-related topic. These submissions will undergo a lightweight review process by a Program Committee composed of former PhD students of Ross.

Accepted papers will be published in the Festschrift book and presented at the event. For a subset of the accepted papers, the authors will be invited to submit an expanded version to a special issue of the Journal of Cybersecurity honouring Ross’s scholarly contributions and legacy.

Submissions are limited to five pages in LNCS format (we did say short and punchy!) and will get an equally short presentation slot at the Rossfest. Let’s keep it snappy, as Ross himself would have liked. Five pages excluding bibliography and any appendices, that is, and maximum eight pages total.

Topic-wise, anything related to security, taking the word in its broadest sense, is fair game, from cryptography and systems to economics, psychology, policy and much more, spanning the wide spectrum of fields that Ross himself explored over the course of his career. But make it a scientific contribution rather than just an opinion piece.

Authors will grant us a licence to publish and distribute their articles in the Festschrift but will retain copyright and will be able to put their articles on their web pages or resubmit them wherever else they like. We won’t ask for article charges for publishing in the Festschrift. Bound copies of the Festschrift volume will be available to purchase at cost during the Rossfest Symposium, or later through print-on-demand. A DRM-free PDF will be available online at no charge.

Informal memories

We also solicit informal “cherished memories” contributions, along the lines of those collected by Anh Vu. These too will be collected in the volume, and a selection of them will be presented orally at the event.

The Rossfest Symposium

The Rossfest Symposium will last the whole day and will take place at the Computer Laboratory (a.k.a. the Department of Computer Science and Technology of the University of Cambridge), where Ross taught, researched and originally obtained his own PhD. Street address: 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK.

Attendance at the Rossfest Symposium is free and not conditional on the submission of a contribution, but registration will be required for us to manage numbers and catering.

In the evening there will also be a formal celebration banquet at Trinity College. To attend, please purchase a ticket; registration and payment links will appear on this page in due course. Street address: Trinity Street, Cambridge CB2 1TQ, UK.

We have timed the Rossfest to be adjacent in time and space to the Security Protocols Workshop, an event that Ross regularly attended. The SPW will take place in Trinity College Cambridge on 26 and 27 March 2025. This will allow you to attend both events with a single trip to Cambridge. Note that attendance at SPW requires presenting a position paper: unlike the Rossfest, at SPW all attendees must also speak.

Accommodation in Cambridge

The chosen dates are out of term, meaning you might be able to book a room in one of the 31 colleges. Otherwise, consider your favourite online booking aggregator.

Sign up

To receive notifications (e.g. “the registration and payment links are now up”), sign up on this Google form. Self-service unsubscribe at any time.


25 November 2024: Deadline for submission of Festschrift articles
23 December 2024: Invitations to authors to present orally
13 January 2025: Early bird (discounted) registration deadline for banquet
10 February 2025: Final registration deadline for banquet and symposium
25 March 2025: Rossfest Symposium (and optional banquet)
26-27 March 2025: Security Protocols Workshop (unrelated but possibly of interest)

The Twenty-ninth International Workshop on Security Protocols will take place from Wednesday 26 March to Thursday 27 March 2025 in Cambridge, United Kingdom. It will be dedicated to the memory of Ross Anderson and preceded by the Rossfest Symposium, which will take place on Tuesday 25 March 2025, also in Cambridge, UK. Come to both!

As in previous years, attendance at the International Workshop on Security Protocols is by invitation only.  (How do I get invited? Submit a position paper.)


The theme of the 2025 workshop is: “Controversial Security – In honour of Ross Anderson”. In other words, “any security topic that Ross Anderson might have wanted to debate with you”, which leaves you with plenty of leeway.

This is a workshop for discussion of novel ideas, rather than a conference for finished work. We seek papers that are likely to stimulate an interesting discussion. New authors are encouraged to browse through past volumes of post-proceedings (search for Security Protocols Workshop in the Springer LNCS series) to get a flavour for the variety and diversity of topics that have been accepted in past years, as well as the lively discussion that has accompanied them.


The long-running Security Protocols Workshop has hosted lively debates with many security luminaries (the late Robert Morris, chief scientist at the NSA and well known for his pioneering work on Unix passwords, used to be a regular) and continues to provide a formative event for young researchers. The post-proceedings, published in LNCS, contain not only the refereed papers but the curated transcripts of the ensuing discussions (see the website for pointers to past volumes).

Attendance is by invitation only. To be considered for invitation you must submit a position paper: it will not be possible to come along as just a member of the audience. Start writing now! “Writing the paper is how you develop the idea in the first place”, in the wise words of Simon Peyton Jones.

The Security Protocols Workshop is, and has always been, highly interactive. We actively encourage participants to interrupt and challenge the speaker. The presented position papers will be revised and enhanced before publication as a consequence of such debates. We believe the interactive debates during the presentations, and the spontaneous technical discussions during breaks, meals and the formal dinner, are part of the DNA of our workshop. We encourage you to present stimulating and disruptive ideas that are still at an initial stage, rather than “done and dusted” completed papers of the kind that a top-tier conference would expect. We are interested in eliciting interesting discussion rather than collecting archival material.


Short indicative submissions are preferred. You will have the opportunity to extend and revise your paper both before the pre-proceedings are issued, and again after the workshop. At the workshop, you will be expected to spend a few minutes introducing the idea of your paper, in a way that facilitates a longer more general discussion. Pre-proceedings will be provided at the workshop. See the Submission page for more details.


• Fabio Massacci (Program Chair), University of Trento / Vrije Universiteit Amsterdam
• Frank Stajano (General Chair), University of Cambridge
• Vashek (Vaclav) Matyas, Masaryk University
• Jonathan Anderson, Memorial University
• Mark Lomas, Capgemini

Accommodation in Cambridge

The chosen dates are out of term, meaning you might be able to book a room in one of the 31 colleges. Otherwise, consider your favourite online booking aggregator.


25 November 2024: Submission of position papers
23 December 2024: Invitations to authors
13 January 2025: Early bird (discounted) registration deadline
3 February 2025: Revised papers due
10 February 2025: Final registration deadline
25 March 2025: Rossfest Symposium (unrelated but possibly of interest)
26-27 March 2025: Security Protocols Workshop

For further details visit the web page at the top of this message. To be notified when the registration and paper submission pages open, sign up on this Google form. Self-service unsubscribe at any time.

Security and Human Behavior 2024

The seventeenth Security and Human Behavior workshop was hosted by Bruce Schneier at Harvard University in Cambridge, Massachusetts on the 4th and 5th of June 2024 (Schneier blog).

This playlist contains audio recordings of most of the presentations, curated with timestamps to the start of each presentation. Click the descriptions to see them.

At lunch on the first day, several attendees remembered the recently departed Ross Anderson, who co-founded this workshop with Bruce Schneier and Alessandro Acquisti in 2008. That recording is in the playlist too.

Kami Vaniea kept up Ross’s tradition by liveblogging most of the event.

I’ll be hosting next year’s SHB at the University of Cambridge.

Cambridge Cybercrime Conference 2024 – Liveblog

The Cambridge Cybercrime Centre’s seventh one-day conference on cybercrime was held on Monday, 10th June 2024.

Similar to previous “liveblog” coverage of conferences and workshops on Light Blue Touchpaper, here is a “liveblog”-style overview of the talks at this year’s conference.

L. Jean Camp – Global Cyber Resilience Using a Public Health Model of eCrime (Keynote)

Who gets phished? This still hasn’t changed much in 20 years. We still don’t know how people are targeted, or even if they are targeted. People need to identify security indicators, domain names, etc., and this is hard. Current practice with warnings does not provide what people need. While people can learn how to use bad interfaces, we can’t expect people to pay attention all the time and without interruption. Expertise alone is not adequate: LastPass devs were phished. She looked at phishing factors, and asked how good each population was at identifying phishing and legitimate websites, finding that familiarity and gender made no significant difference for identifying phishing websites, but that familiarity was important for identifying legitimate websites. Later, they asked participants about security expertise. We tend to write warnings for ourselves (security experts), rather than for end users. They also compared risk perception across populations. Overall, they found computer expertise (positive) and age (negative) were the primary factors in identifying phishing pages. How can we learn from public health to provide more effective warnings which work for the wider general population?

Gabriella Williams – Beyond Borders: Exploring Security, Privacy, Cultural, and Legal Challenges in Metaverse Sexual Harassment

Gabriella is a PhD researcher in digital identity and age assurance methods to mitigate virtual harms. The virtual reality environment (the metaverse) introduces new risks and harms, by creating a new environment with anonymity where people can be whoever they want to be. Gabriella asks whether sexual harassment in the metaverse is a crime. There is currently no legal framework, and jurisdictions vary online. The metaverse has cultural issues, with standing close to someone, making unwanted contact, and inappropriate jokes. How can this be moderated? There are many issues with collecting metadata on social interactions and biometric data, and security issues with over-reliance on automation and threats to authentication and integrity. Her current research looks at challenges around implementing age assurance, and how identities can be authenticated.

Bomin Keum – The Incel Paradox: Does Collective Self-Loathing Facilitate Radicalisation or Belonging?

What don’t we know and why don’t we know it? We have a hard time agreeing on what radicalisation is, but this is a process rather than instances of extremist violence. Online radicalisation is facilitated through anonymity, perceived strength in numbers, and too much information spread and absorbed quickly. Bomin considers the use of the Us vs Them framework: a collectively constructed perception differentiating the in-group from the out-group. Incel communities show negativity within the group as well as out, which is different to other communities. The Us vs Them framework has “us” as self-directed victimhood with men deprived of their “right to sex”, whereas the “them” refers to a perception of society giving “too much freedom to women”. What are the self and other narrative framings, and which topics are associated with self vs other narrative frames? Bomin compares 2019 and 2020 datasets around the start of the pandemic. Internal group themes have helplessness and victimisation, whereas outside has unfair advantages and shameful other. Collectively, there are narratives of community, violence, and vision. They note you can’t take discussions at face value, as the language used can be quite extreme and text-level analysis may not reflect intent. Also, there is some shifting from blame to mockery of others. Not all radical actors commit violence, but studying them can inform understanding of the facilitators behind intensification. Applying theories to these communities can be questionable, due to the unique aspects of the communities, and needs further data-driven research to improve on theory.

Jason Nurse – Ransomware Harms and the Victim Experience

Supply chain issue with St. Thomas’ Hospital last week, where a supplier of hospitals was hit by ransomware, and a critical incident was declared in London. Focus in the media on the financial impact, but what are the other harms of this, on both individuals and society? Jason carried out a literature review, and ran workshops and interviews alongside harm modelling to explore effects. What do we know already from the literature, and what can we learn from individuals? Interviews were focused on people who were subject to a ransomware attack or had professional experience of supporting organisations affected by ransomware. This includes cyber insurance organisations, which are now a big player. Gathering qualitative data from interviews, and using thematic analysis. Findings show this is a serious risk for all organisations, including small businesses: “everything you relied on yesterday doesn’t work today”. Can also create reputational harm for organisations. Applying the idea of orders of harm: first-order are harms directly to the person or org, second-order are downstream orgs and individuals, and third-order are the economy and society. Implications include a loss of trust in law enforcement, reduced faith in public services, and normalisation of cybercrime. Other impacts include harms to staff: staff members having to deal with the situation, including overworking to resolve issues. Highlights potential correlations between burnout and cybersecurity issues. Next, Jason looks at how to model harms. They gather data on well-publicised events to establish relationships between harms. This finds many downstream harms: we can more deeply explore harms arising throughout society rather than just “the data was encrypted”.

Ethan Thomas – Operation Brombenzyl and Operation Cronos

DDoS for hire continues to be a threat, enabling easy attacks against infrastructure, and these are targeted by site takedowns and arrests. Finding a new way to provide a longer lasting impact, disrupting the marketplace. Using splash pages to deter users, and also creating law enforcement-run DDoS for hire websites. Some of the disguised sites were “seized”, others were “outed” as NCA controlled, and some are still running. The second operation is Cronos, again using deception but applied to ransomware attacks. Finding broad deterrence messaging doesn’t always work well, the focus is now on showing cases where cybercriminals did not uphold their promises to victims.

Luis Adan Saavedra del Toro – Sideloading of Modded Apps: User Choice, Security and Piracy

What are modded apps, and why do users use them? Android users have the capability of installing any app they download from the internet, outside of the Google Play Store. Third-party stores have ads and user review features. Modded apps have unlocked pro features, such as a modded Spotify app to bypass ads and other paid features. Modded gaming apps have free in-app purchases. Luis found over 400 modded Android app markets, and crawled the 13 most popular, creating the ModZoo dataset. Most of these modded apps are games, and there are lots of duplicates across markets. None of the markets had any payment infrastructure. They discovered apps with changed code had added additional permissions and advertising libraries. Some apps’ Ad IDs had been changed. 9% of those with modded code were malicious. iOS has misconceptions around jailbreaking. iOSModZoo has ~30k apps. iOSZoo is a dataset of ~55k free App Store apps. Most iOS modded apps are pirated copies of paid apps.

Felipe Moreno-Vera and Daniel S. Menasché – Beneath the Cream: Unveiling Relevant Information Points from CrimeBB with Its Ground Truth Labels

Looking at exploits which are shared on underground forums. The team used three types of labels: post-type, intent, and crime-type, which they used to complement their approach to tracking keywords, their usage, and different vulnerability levels discussed. They create a classifier for threats, so they can identify what is being discussed. They use regex to identify CVEs, and a function to identify language. They note the labels used were only available for one site, and later use ChatGPT to create more labels for posts. They find ChatGPT improves on existing labels.
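The CVE-matching step mentioned above can be sketched with a short regular expression. The exact pattern the authors used is not given, so the regex below is an assumption based on the official CVE ID format (a four-digit year followed by a sequence number of four or more digits):

```python
import re

# Sketch of CVE extraction from forum posts; the pattern is an assumption
# based on the standard CVE ID syntax "CVE-YYYY-NNNN…".
CVE_PATTERN = re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.IGNORECASE)

def extract_cves(post_text):
    """Return all CVE identifiers mentioned in a post, normalised to upper case."""
    return [m.upper() for m in CVE_PATTERN.findall(post_text)]

print(extract_cves("Anyone got a working PoC for cve-2021-44228?"))
```

The word boundaries (`\b`) stop partial matches inside longer tokens, and the case-insensitive flag catches the lower-case forms common in informal forum language.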

Jeremy D. Seideman, Shoufu Luo, and Sven Dietrich – The Guy In The Chair: Examining How Cybercriminals Use External Resources to Supplement Underground Forum Conversations

“Guy in the chair” is the support network that “connects the dots”. They looked at underground forum conversations to identify what this support network is. Do people post URLs, do they advertise things, do they talk about other communications? What is the wider context? Past literature shows that forums work best as a social network, forming communities. Their project examines offensive AI usage, presenting their data pipeline, which they use to clean data prior to using topic transfer models. Following this, they identified buckets of URLs. The majority of known links were other forums, code sharing, image hosting, and file sharing. Lots of the links had link rot. Future work will further explore the application of analysis methods used with archaeological count data to their dataset.

Anh V. Vu – Yet Another Diminishing Spark: Low-level Cyberattacks in the Israel-Gaza Conflict

Anh notes differing perspectives of cyberwar in the world media, with a strong focus on high-profile cyber attacks. However, what is happening with low-level cybercrime actors and the services supporting these attacks? They are using data from website defacement attacks and UDP amplification DDoS attacks, alongside collections of volunteer hacking discussions. They contrast the conflicts of Russia vs Ukraine and Israel vs Gaza. Anh finds interest in low-level DDoS and defacement attacks dropped off quickly, although notes that these findings should not be conflated with state-sponsored cyber attacks.

Dalyapraz Manatova – Relationships Matter: Reconstructing the Organisational Structure of a Ransomware Group

Dalyapraz has been studying dynamics of cybercrime networks, thinking about these as a socio-technical complex system, with technical, economical, and social factors. Existing literature shows that eCrime has “communities”, with admins and moderators. When these communities are disrupted, they often move to other places. Participants often have different pseudonyms for who they are communicating with, e.g. as an administrator or to trade. However, these communities are more like organisations, with roles, tasks, scale and scope, following a structure similar to as-a-service (aaS) offerings.

Marilyne Ordekian – Investigating Wrench Attacks: Physical Attacks Targeting Cryptocurrency Users

Wrench attacks have been around since the start of Bitcoin, yet have received little academic attention. Marilyne gathered data on wrench attacks through Bitcoin Talk discussions and interviews. Incidents were reported across different areas, from 2011 to 2021. There were peaks of incidents, which coincided with bitcoin reaching an all-time high. Why? Potential reasons include financial gain, theft being easier than hacking, and no account transfer limits. They found that 25% of these incidents occurred during in-person meet-ups. Are wrench attacks reported? No, they are underreported. They propose safety mechanisms for individuals, including not bragging, diversifying funds, and digital safety practices. Also, they suggest existing regulations could be strengthened, such as improved KYC verification to consider the risk of wrench attacks. System design changes could include redesigning apps to hide balance amounts.

Mariella Mischinger – Investigating and Comparing Discussion Topics in Multilingual Underground Forums

Mariella finds prior literature on forums often lacks a deep understanding of the content and misses niche topics. Also, there is a lack of research into multilingual underground forums, and a lack of data on invite-only forums. Datasets contain lots of noisy, informal language. They found sentence embeddings were useful to cluster the content into topics, as this included the context and intention in sentences. They extracted topics from the clusters of sentences using LDA, with cosine similarity then finding similar topics across languages. Mariella then finds this method can be used to find pockets of knowledge: topics only discussed in one language. Further work identified dark keywords, combining neologisms with groups of keywords.
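The cross-language matching step can be sketched with plain cosine similarity over topic vectors. Everything below is illustrative: the topic names and numbers are invented toy values standing in for averaged sentence embeddings per topic cluster, not data from the actual study.

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "topic vectors"; in the real pipeline these would be embeddings
# derived from the clustered sentences in each language.
english_topics = {"carding": [0.9, 0.1, 0.0], "malware": [0.1, 0.8, 0.2]}
russian_topics = {"topic_1": [0.85, 0.15, 0.05], "topic_2": [0.0, 0.1, 0.9]}

# Pair each English topic with its most similar Russian topic.
for name, vec in english_topics.items():
    best = max(russian_topics, key=lambda t: cosine(vec, russian_topics[t]))
    print(name, "->", best)
```

A topic with no good match in the other language (similarity below some threshold) would be a candidate “pocket of knowledge” discussed only in one community.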

RIP Ross Anderson

Someone else will undoubtedly say it much better than I will here but one of us has to break the very sad news: Ross Anderson died yesterday.

His enthusiasm, his wide-spectrum intellectual curiosity and his engaging prose were unmatched. He stood up vigorously for the causes he believed in. He formed communities around the new topics he engaged with, from information hiding to fast software encryption, security economics, security and human behaviour and more. He served as an inspiring mentor for generations of graduate students at Cambridge—I know first hand, as I was fortunate enough to be admitted as his PhD student when he was still a freshly minted lecturer and had not graduated any students yet. I learnt my trade as a Cambridge Professor from him and will be forever grateful, as will dozens of my “academic brothers” who were also supervised by him, several of whom post regularly on this blog.

Ross, thank you so much for your lively, insightful and stimulating contributions to every subfield of security. You leave a big void that no one will be able to fill. I will miss you.


Owl, a new augmented password-authenticated key exchange protocol

In 2008, I wrote a blog post to introduce J-PAKE, a password-authenticated key exchange (PAKE) protocol (joint work with Peter Ryan). The goal of that post was to invite public scrutiny of J-PAKE. Sixteen years later, I am pleased to say that no attacks on J-PAKE have been found and that the protocol has been used in many real-world applications, e.g., Google Nest, ARM Mbed, Amazon Fire Stick, Palemoon sync and Thread products.

J-PAKE is a balanced PAKE, meaning that both sides must hold the same secret for mutual authentication. In the example of the J-PAKE-based IoT commissioning process (part of the Thread standard), a random password is generated to authenticate the key exchange process and is discarded afterwards. However, in some cases, it is desirable to store the password. For example, in a client-server application, the user knows a password, while the server stores a one-way transformation of the password. 
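To illustrate the idea of a “one-way transformation” stored by the server, here is a toy discrete-log verifier in the style of SRP. This is not the construction used by Owl, J-PAKE or OPAQUE, and the group parameters are deliberately tiny and insecure; it only shows why the server’s record is not the password itself.

```python
import hashlib

p = 2**127 - 1   # toy Mersenne-prime modulus: far too small for real use
g = 5            # toy generator

def verifier(username, password, salt):
    # x = H(salt || username || password); the server stores only v = g^x mod p,
    # never the password. Recovering the password from v requires a discrete
    # log or an offline dictionary attack, not a simple lookup.
    x = int.from_bytes(hashlib.sha256(salt + username + password).digest(), "big")
    return pow(g, x, p)

v = verifier(b"alice", b"hunter2", b"random-salt")
print(v != verifier(b"alice", b"hunter2", b"other-salt"))  # True: the salt changes v
```

During login, an augmented PAKE lets the client prove knowledge of the password against this verifier without ever sending the password to the server.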

PAKE protocols designed for the above client-server setting are called augmented (as opposed to balanced, for the peer-to-peer setting). So far the only augmented PAKE protocol that has enjoyed wide use is SRP-6a, used for example in Apple’s iCloud, 1Password and Proton Mail. SRP-6a is the latest version of Wu’s 1998 SRP-3 scheme, after several revisions to address attacks. Limitations of SRP-6a are well known, including heuristic security, a lack of efficiency (due to the mandated use of a safe prime as the modulus) and no support for elliptic curve implementation.

In 2018, an augmented PAKE scheme called OPAQUE was proposed by Jarecki, Krawczyk and Xu. In 2020, IETF selected OPAQUE as a candidate for standardization. A theoretical advantage promoted in favour of OPAQUE is the so-called pre-computation security. When the server is compromised, an offline dictionary attack to uncover the plaintext password is possible for both OPAQUE and SRP-6a. For OPAQUE, its pre-computation security means that the attacker can’t use a pre-computed table, whilst for SRP-6a, the attacker may use a pre-computed table, but it must be a unique table for each user, which requires a large amount of computation and storage. Therefore, the practical advantage provided by pre-computation security is limited. 
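The difference between the two attack models can be illustrated with ordinary hashing. This is a simplification (neither SRP-6a nor OPAQUE stores a plain password hash); it only shows why per-user salting forces the attacker to redo the dictionary computation after, rather than before, the compromise. The wordlist and salt values are made up.

```python
import hashlib

WORDLIST = ["letmein", "hunter2", "correcthorse"]

def unsalted(pw):
    # One table of unsalted hashes serves every user, and can be built in advance.
    return hashlib.sha256(pw.encode()).hexdigest()

def salted(pw, salt):
    # With a per-user salt, a dictionary table is only valid for that one salt.
    return hashlib.sha256(salt + pw.encode()).hexdigest()

# The attacker precomputes ONE table before any breach...
table = {unsalted(w): w for w in WORDLIST}

# ...and instantly inverts any unsalted record leaked later.
print(table[unsalted("hunter2")])  # hunter2

# Against salted records the precomputed table is useless: the dictionary
# work must be repeated per user, after learning each user's salt.
print(salted("hunter2", b"alice-salt") in table)  # False
```

Pre-computation security, as in OPAQUE, rules out even this per-user precomputation; with SRP-6a the salt is revealed during the protocol, so a determined attacker could build a per-user table ahead of the breach.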

Apart from pre-computation security, OPAQUE has a few open issues which leave it unclear whether it will replace SRP-6a in practice. First, the original OPAQUE protocol defined in the 2018 paper leaks password update information to passive attackers, whilst SRP-6a doesn’t have this leakage. Furthermore, OPAQUE relies on a constant-time hash-to-curve function available for all elliptic curves, but details about the instantiation of this function remain to be established. Finally, the 2018 paper didn’t give a full specification of OPAQUE. In 2020, when OPAQUE was selected by IETF, its specification remained incomplete. The task of completing the spec was left as a post-selection exercise; today, it is still not finished.  

Motivated by the recognised limitations of SRP-6a and OPAQUE, we propose a new augmented PAKE scheme called Owl (joint work with Samiran Bag, Liqun Chen and Paul van Oorschot). Owl is obtained by efficiently adapting J-PAKE to an augmented setting, providing the augmented security against server compromise with yet lower computation than J-PAKE. To the best of our knowledge, Owl is the first augmented PAKE solution that provides systematic advantages over SRP-6a in terms of security, computation, round efficiency, message sizes, and cryptographic agility. 

On 5 March 2024, I gave a presentation on Owl at Financial Cryptography and Data Security 2024 (FC’24) in Curacao. The purpose of this blog post is to invite public scrutiny of Owl. See the Owl paper and the FC slides for further details. An open-source Java program that shows how Owl works in an elliptic curve setting is freely available. We hope security researchers and developers will find Owl useful, especially in password-based client-server settings where a PKI is unavailable (hence TLS doesn’t apply). Like J-PAKE, Owl is not patented and is free to use.

Grasping at straw

Britain’s National Crime Agency has spent the last five years trying to undermine encryption, saying it might stop them arresting hundreds of men every month for downloading indecent images of children. Now they complain that most of the men they do prosecute escape jail. Eight in ten men convicted of image offences escaped an immediate prison sentence, and the NCA’s Director General Graeme Biggar describes this as “striking”.

I agree, although the conclusions I draw are rather different. In Chatcontrol or Child Protection? I explained how the NCA and GCHQ divert police resources from tackling serious contact offences, such as child rape and child murder, to much less serious secondary offences around images of historical abuse and even synthetic images. The structural reasons are simple enough: they favour centralised policing over local efforts, and electronic surveillance over community work.

One winner is the NCA, which apparently now has 200 staff tracing people associated with alarms raised automatically by Big Tech’s content surveillance, while the losers include Britain’s 43 local police forces. If 80% of the people arrested as a result of Mr Biggar’s activities don’t even merit any jail time, then my conclusion is that the Treasury should cut his headcount by at least 160, and give each Chief Constable an extra 3-4 officers instead. Frontline cops agree that too much effort goes into image offences and not enough into the more serious contact crimes.

Mr Biggar argues that Facebook is wicked for turning on end-to-end encryption in Facebook Messenger, as he won’t be able to catch as many bad men in future. But if encryption stops him wasting police time, well done Zuck! Mr Biggar also wants Parliament to increase the penalties. But even though Onan was struck dead by God for spilling his seed upon the ground, I hope we can have more rational priorities for criminal law enforcement in the 21st century.

How hate sites evade the censor

On Tuesday we had a seminar from Liz Fong-Jones entitled “Reverse engineering hate” about how she, and a dozen colleagues, have been working to take down a hate speech forum called Kiwi Farms. We already published a measurement study of their campaign, which forced the site offline repeatedly in 2022. As a result of that paper, Liz contacted us and this week she told us the inside story.

The forum in question specialises in personal attacks, and many of their targets are transgender. Their tactics include doxxing their victims, trawling their online presence for material that is incriminating or can be misrepresented as such, putting doctored photos online, and making malicious complaints to victims’ employers and landlords. They describe this as “milking people for laughs”. After a transgender activist in Canada was swatted, about a dozen volunteers got together to try to take the site down. They did this by complaining to the site’s service providers and by civil litigation.

This case study is perhaps useful for the UK, where the recent Online Safety Bill empowers Ofcom to do just this – to use injunctions in the civil courts to take down unpleasant websites.

The Kiwi Farms operator has for many months resisted the activists by buying the services required to keep his website up, including his data centre floor space, his transit, his AS, his DNS service and his DDoS protection, through a multitude of changing shell companies. The current takedown mechanisms require a complainant to first contact the site operator; he publishes complaints, so his followers can heap abuse on them. The takedown crew then has to work up a chain of suppliers. Their processes are usually designed to stall complainants, so that getting through to a Tier 1 and getting them to block a link takes weeks rather than days. And this assumes that the takedown crew includes experienced sysadmins who can talk the language of the service providers, to whose technical people they often have direct access; without that, it would take months rather than weeks. The net effect is that it took a dozen volunteers thousands of hours over six months from October 2022 to April 2023 to get all the Tier 1s to drop KF, and over $100,000 in legal costs. If the bureaucrats at Ofcom are going to do this work for a living, without the skills and access of Liz and her team, it could be harder work than they think.

Liz’s seminar slides are here.

Hacktivism, in Ukraine and Gaza

People who write about cyber-conflict often talk of hacktivists and other civilian volunteers who contribute in various ways to a cause. Might the tools and techniques of cybercrime enable its practitioners to be effective auxiliaries in a real conflict? Might they fall foul of the laws of war, and become unlawful combatants?

We have now measured hacktivism in two wars – in Ukraine and Gaza – and found that its effects appear to be minor and transient in both cases.

In the case of Ukraine, pro-Ukrainian hackers attacked Russian websites after the invasion, and Russian hackers returned the compliment. The tools they used, such as web defacement and DDoS, can be measured reasonably well using resources we have developed at the Cambridge Cybercrime Centre. The effects were largely trivial, expressing solidarity and sympathy rather than making any persistent contribution to the conflict. Their interest in the conflict dropped off rapidly.

In Gaza, we see the same pattern. After Hamas attacked Israel and Israel declared war, there was a surge of attacks that peaked after a few days, with most targets being strategically unimportant. In both cases, discussion on underground cybercrime forums tailed off after a week. The main difference is that the hacktivism against Israel is one-sided; supporters of Palestine have attacked Israeli websites, but the number of attacks on Palestinian websites has been trivial.

Extending transparency, and happy birthday to the archive

I was delighted by two essays by Anton Howes, on The Replication Crisis in History and on Open History. We computerists have long had an open culture: we make our publications open, as well as sharing the software we write and the data we analyse. My work on security economics and security psychology has taught me that this culture is not yet as well-developed in the social sciences. Yet we do what we can. Although we can't have official conference proceedings for the Workshop on the Economics of Information Security – as then the economists would not be able to publish their papers in journals afterwards – we found a workable compromise by linking preprints from the website and from a liveblog. Economists and psychologists with whom we work have found their citation counts and h-indices boosted by our publicity mechanisms; they have incentives to learn.

A second benefit of transparency is reproducibility, the focus of Anton's essay. Scholars are exposed to many temptations, which vary by subject matter but are stronger when it's hard for others to check your work. Mathematical proofs should be clear and elegant but are all too often opaque or misleading; software should be open-sourced for others to play with; and we do what we can to share the data we collect for research on cybercrime and abuse.

Anton describes how more and more history books are found to have weak foundations, where historians quote things out of context, ignore contrary evidence, and elaborate myths and false facts into misleading stories that persist for decades. How can history correct itself more quickly? The answer, he argues, is Open History: making as many sources publicly available as possible, just like we computerists do.

As it happens, I scanned a number of old music manuscripts years ago to help other traditional music enthusiasts, but how can this be done at scale? One way forward comes from my college’s Archives Centre, which holds the personal papers of Sir Winston Churchill as well as other politicians and a number of eminent scientists. There the algorithm is that when someone requests a document, it’s also scanned and put online; so anything Alice looked at, Bob can look at too. This has raised some interesting technical problems around indexing and long-term archiving which I believe we have under control now, and I’m pleased to say that the Archives Centre is now celebrating its 50th anniversary.

It would also be helpful if old history books were as available online as they are in our library. Given that the purpose of copyright law is to maximise the amount of material that’s eventually available to all, I believe we should change the law to make continued copyright conditional on open access after an initial commercial period. Otherwise our historians’ output vanishes from the time that their books come off sale, to the time copyright expires maybe a century later.

My own Security Engineering book may show the way. With both the first edition in 2001 and the second edition in 2008, I put six chapters online for free at once, then released the others four years after publication. For the third edition, I negotiated an agreement with the publishers to put the chapters online for review as I wrote them. So the book came out by instalments, like Dickens’ novels, from April 2019 to September 2020. On the first of November 2020, all except seven sample chapters disappeared from this page for a period of 42 months; I’m afraid Wiley insisted on that. But after that, the whole book will be free online forever.

This also makes commercial sense. For both the 2001 and 2008 editions, paid-for sales of paper copies increased significantly after the whole book went online. People found my book online, liked what they saw, and then bought a paper copy rather than just downloading it all and printing out a thousand-odd pages. Open access after an exclusive period works for authors, for publishers and for history. It should be the norm.

How to Spread Disinformation with Unicode

There are many different ways to represent the same text in Unicode. We’ve previously exploited this encoding-visualization gap to craft imperceptible adversarial examples against text-based machine learning systems and invisible vulnerabilities in source code.
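The encoding-visualization gap is easy to demonstrate in a few lines of Python (a minimal sketch; the brand name is just an illustrative example): two strings that render identically, or nearly so, can compare unequal because their underlying code points differ, and standard Unicode normalization does not unify such confusables.

```python
import unicodedata

# Two strings that render (near-)identically but differ at the code-point level.
latin = "paypal"       # all Latin letters
mixed = "p\u0430ypal"  # U+0430 CYRILLIC SMALL LETTER A in place of Latin 'a'

print(latin == mixed)  # False: different code points
print([hex(ord(c)) for c in latin])
print([hex(ord(c)) for c in mixed])

# Normalization (NFC/NFKC) canonicalises compatibility variants, but it does
# not map Cyrillic letters onto their Latin look-alikes, so systems matching
# on raw code points treat these as unrelated strings.
print(unicodedata.normalize("NFKC", mixed) == latin)  # still False
```

This is why text pipelines that compare or index strings byte-by-byte can be steered by an attacker who controls which of several visually equivalent encodings is used.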

In our latest paper, we demonstrate another attack that exploits the same technique to target Google Search, Bing’s GPT-4-powered chatbot, and other text-based information retrieval systems.

Consider a snake-oil salesman trying to promote a bogus drug on social media. Sensible users would do a search on the alleged remedy before ordering it, and sites containing false information would normally be drowned out by genuine medical sources in modern search engine rankings. 

But what if our huckster uses a rare Unicode encoding to replace one character in the drug’s name on social media? If a user pastes this string into a search engine, it will throw up web pages with the same encoding. What’s more, these pages are very unlikely to appear in innocent queries.
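The copy-paste trick can be sketched as follows (the drug name is invented, and a zero-width space stands in for the paper's rare-encoding substitutions as one example of the same class of manipulation): the doctored string looks the same on screen, but as a search query it matches only pages seeded with the same bytes.

```python
# Hypothetical drug name; inserting U+200B ZERO WIDTH SPACE yields a string
# that renders identically in most fonts but is byte-distinct, so a search
# for it surfaces only pages seeded with the same encoding.
name = "Panacex"
poisoned = name[:4] + "\u200b" + name[4:]

print(name, poisoned)            # visually indistinguishable in most fonts
print(name == poisoned)          # False
print(len(name), len(poisoned))  # 7 vs 8 code points
```

An innocent user typing the name by hand produces the plain string and never sees the seeded pages; only someone who copies and pastes the doctored string is routed to them.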

The upshot is that an adversary who can manipulate a user into copying and pasting a string into a search engine can control the results seen by that user. They can hide such poisoned pages from regulators and others who are unaware of the magic encoding. These techniques can empower propagandists to convince victims that search engines validate their disinformation.