Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Owl, a new augmented password-authenticated key exchange protocol

In 2008, I wrote a blog post to introduce J-PAKE, a password-authenticated key exchange (PAKE) protocol (joint work with Peter Ryan). The goal of that post was to invite public scrutiny of J-PAKE. Sixteen years later, I am pleased to say that no attacks on J-PAKE have been found, and the protocol has been used in many real-world applications, e.g., Google Nest, ARM Mbed, Amazon Fire stick, Palemoon sync and Thread products.

J-PAKE is a balanced PAKE, meaning that both sides must hold the same secret for mutual authentication. In the example of the J-PAKE-based IoT commissioning process (part of the Thread standard), a random password is generated to authenticate the key exchange process and is discarded afterwards. However, in some cases, it is desirable to store the password. For example, in a client-server application, the user knows a password, while the server stores a one-way transformation of the password. 

PAKE protocols designed for the above client-server setting are called augmented (as opposed to balanced PAKEs, which suit the peer-to-peer setting). So far the only augmented PAKE protocol that has enjoyed wide use is SRP-6a, e.g., used in Apple’s iCloud, 1Password and Proton Mail. SRP-6a is the latest version of Wu’s 1998 SRP-3 scheme, after several revisions to address attacks. Limitations of SRP-6a are well known, including heuristic security, a lack of efficiency (due to the mandated use of a safe prime as the modulus) and no support for elliptic curve implementation.

In 2018, an augmented PAKE scheme called OPAQUE was proposed by Jarecki, Krawczyk and Xu. In 2020, IETF selected OPAQUE as a candidate for standardization. A theoretical advantage promoted in favour of OPAQUE is the so-called pre-computation security. When the server is compromised, an offline dictionary attack to uncover the plaintext password is possible for both OPAQUE and SRP-6a. For OPAQUE, its pre-computation security means that the attacker can’t use a pre-computed table, whilst for SRP-6a, the attacker may use a pre-computed table, but it must be a unique table for each user, which requires a large amount of computation and storage. Therefore, the practical advantage provided by pre-computation security is limited. 
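To make the distinction concrete, here is a minimal Python sketch using a deliberately simplified salted-verifier construction; the parameters and formula are illustrative stand-ins, not the actual SRP-6a or OPAQUE definitions. It shows why a pre-computed dictionary table against a salted scheme like SRP-6a must be built separately for each user.

```python
# Toy sketch: why a pre-computed dictionary against a salted verifier must be
# built per user. Parameters and formulas are illustrative stand-ins, NOT the
# real SRP-6a or OPAQUE constructions.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus; real deployments use a large safe prime
G = 5            # toy generator

def toy_verifier(salt: bytes, password: str) -> int:
    """Simplified salted verifier: v = G^H(salt || password) mod P."""
    x = int.from_bytes(hashlib.sha256(salt + password.encode()).digest(), "big")
    return pow(G, x, P)

# Registration: the server stores (salt, verifier), never the password itself.
salt = secrets.token_bytes(16)
stored_verifier = toy_verifier(salt, "correct horse battery staple")

# In SRP-6a the salt is revealed during the protocol, so an attacker can build
# a table of candidate verifiers for THIS user before any server breach...
dictionary = ["password", "letmein", "correct horse battery staple"]
table = {toy_verifier(salt, guess): guess for guess in dictionary}
print(table.get(stored_verifier))   # -> 'correct horse battery staple'

# ...but the same table is useless against a user with a different salt, so the
# computation and storage must be repeated per user. OPAQUE's pre-computation
# security prevents even this per-user table from being built in advance.
other_verifier = toy_verifier(secrets.token_bytes(16), "correct horse battery staple")
print(table.get(other_verifier))    # -> None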

Apart from pre-computation security, OPAQUE has a few open issues which leave it unclear whether it will replace SRP-6a in practice. First, the original OPAQUE protocol defined in the 2018 paper leaks password update information to passive attackers, whilst SRP-6a doesn’t have this leakage. Furthermore, OPAQUE relies on a constant-time hash-to-curve function available for all elliptic curves, but details about the instantiation of this function remain to be established. Finally, the 2018 paper didn’t give a full specification of OPAQUE. In 2020, when OPAQUE was selected by IETF, its specification remained incomplete. The task of completing the spec was left as a post-selection exercise; today, it is still not finished.  

Motivated by the recognised limitations of SRP-6a and OPAQUE, we propose a new augmented PAKE scheme called Owl (joint work with Samiran Bag, Liqun Chen and Paul van Oorschot). Owl is obtained by efficiently adapting J-PAKE to an augmented setting, providing security against server compromise with even lower computation than J-PAKE. To the best of our knowledge, Owl is the first augmented PAKE solution that provides systematic advantages over SRP-6a in terms of security, computation, round efficiency, message sizes, and cryptographic agility.

On 5 March 2024, I gave a presentation on Owl at Financial Cryptography and Data Security 2024 (FC’24) in Curacao. The purpose of this blog post is to invite public scrutiny of Owl. See the Owl paper and the FC slides for further details. An open-source Java program that shows how Owl works in an elliptic curve setting is freely available. We hope security researchers and developers will find Owl useful, especially in password-based client-server settings where a PKI is unavailable (hence TLS doesn’t apply). Like J-PAKE, Owl is not patented and is free to use.

Grasping at straw

Britain’s National Crime Agency has spent the last five years trying to undermine encryption, saying it might stop them arresting hundreds of men every month for downloading indecent images of children. Now they complain that most of the men they do prosecute escape jail. Eight in ten men convicted of image offences escaped an immediate prison sentence, and the NCA’s Director General Graeme Biggar describes this as “striking”.

I agree, although the conclusions I draw are rather different. In Chat Control or Child Protection? I explained how the NCA and GCHQ divert police resources from tackling serious contact offences, such as child rape and child murder, to much less serious secondary offences around images of historical abuse and even synthetic images. The structural reasons are simple enough: they favour centralised policing over local efforts, and electronic surveillance over community work.

One winner is the NCA, which apparently now has 200 staff tracing people associated with alarms raised automatically by Big Tech’s content surveillance, while the losers include Britain’s 43 local police forces. If 80% of the people arrested as a result of Mr Biggar’s activities don’t even merit any jail time, then my conclusion is that the Treasury should cut his headcount by at least 160, and give each Chief Constable an extra 3-4 officers instead. Frontline cops agree that too much effort goes into image offences and not enough into the more serious contact crimes.

Mr Biggar argues that Facebook is wicked for turning on end-to-end encryption in Facebook Messenger, as it means he won’t be able to catch as many bad men in future. But if encryption stops him wasting police time, well done Zuck! Mr Biggar also wants Parliament to increase the penalties. But even though Onan was struck dead by God for spilling his seed upon the ground, I hope we can have more rational priorities for criminal law enforcement in the 21st century.

How hate sites evade the censor

On Tuesday we had a seminar from Liz Fong-Jones entitled “Reverse engineering hate” about how she, and a dozen colleagues, have been working to take down a hate speech forum called Kiwi Farms. We had already published a measurement study of their campaign, which forced the site offline repeatedly in 2022. As a result of that paper, Liz contacted us, and this week she told us the inside story.

The forum in question specialises in personal attacks, and many of their targets are transgender. Their tactics include doxxing their victims, trawling their online presence for material that is incriminating or can be misrepresented as such, putting doctored photos online, and making malicious complaints to victims’ employers and landlords. They describe this as “milking people for laughs”. After a transgender activist in Canada was swatted, about a dozen volunteers got together to try to take the site down. They did this by complaining to the site’s service providers and by civil litigation.

This case study is perhaps useful for the UK, where the recent Online Safety Bill empowers Ofcom to do just this – to use injunctions in the civil courts to take down unpleasant websites.

The Kiwi Farms operator has for many months resisted the activists by buying the services required to keep his website up, including his data centre floor space, his transit, his AS, his DNS service and his DDoS protection, through a multitude of changing shell companies. The current takedown mechanisms require a complainant to first contact the site operator; he publishes complaints, so his followers can heap abuse on them. The takedown crew then has to work up a chain of suppliers. Their processes are usually designed to stall complainants, so that getting through to a Tier 1 and getting them to block a link takes weeks rather than days. And this assumes that the takedown crew includes experienced sysadmins who can talk the language of the service providers, to whose technical people they often have direct access; without that, it would take months rather than weeks. The net effect is that it took a dozen volunteers thousands of hours over six months, from October 2022 to April 2023, to get all the Tier 1s to drop KF, and over $100,000 in legal costs. If the bureaucrats at Ofcom are going to do this work for a living, without the skills and access of Liz and her team, it could be harder work than they think.

Liz’s seminar slides are here.

How to Spread Disinformation with Unicode

There are many different ways to represent the same text in Unicode. We’ve previously exploited this encoding-visualization gap to craft imperceptible adversarial examples against text-based machine learning systems and invisible vulnerabilities in source code.
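To illustrate the encoding-visualisation gap concretely (a toy example of my own, not one taken from the papers), the following Python snippet shows strings that render the same, or nearly the same, yet compare as different code point sequences:

```python
# Toy illustration of the encoding-visualisation gap: strings that look alike
# on screen but are different Unicode code point sequences.
import unicodedata

a = "caf\u00e9"        # 'café' with precomposed é (U+00E9)
b = "cafe\u0301"       # 'e' followed by a combining acute accent (U+0301)
c = "caf\u0435\u0301"  # Cyrillic 'е' (U+0435) plus a combining acute accent

print(a == b)  # False: same visual text, different code points
print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b))
# True: a and b are canonically equivalent, so normalisation reconciles them
print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", c))
# False: c uses a homoglyph, so no normalisation maps it back to the Latin text
print([hex(ord(ch)) for ch in a], [hex(ord(ch)) for ch in c])
```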

In our latest paper, we demonstrate another attack that exploits the same technique to target Google Search, Bing’s GPT-4-powered chatbot, and other text-based information retrieval systems.

Consider a snake-oil salesman trying to promote a bogus drug on social media. Sensible users would do a search on the alleged remedy before ordering it, and sites containing false information would normally be drowned out by genuine medical sources in modern search engine rankings. 

But what if our huckster uses a rare Unicode encoding to replace one character in the drug’s name on social media? If a user pastes this string into a search engine, it will throw up web pages with the same encoding. What’s more, these pages are very unlikely to appear in innocent queries.
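Here is a minimal sketch of how such a poisoned search term could be crafted; the product name is made up, and real search engines may normalise some substitutions, so treat it purely as an illustration:

```python
# Hypothetical example: one Latin letter in a made-up drug name is swapped for
# a visually similar Cyrillic letter, so the poisoned string no longer matches
# ordinary queries even though it looks identical to a human reader.
genuine = "Panaceol"                          # what an innocent user would type
poisoned = genuine.replace("a", "\u0430", 1)  # Cyrillic 'а' (U+0430)

print(genuine, poisoned)          # visually indistinguishable in most fonts
print(genuine == poisoned)        # False: a search for one won't match the other
print(poisoned.encode("utf-8"))   # the rare encoding survives copy-and-paste
```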

The upshot is that an adversary who can manipulate a user into copying and pasting a string into a search engine can control the results seen by that user. They can hide such poisoned pages from regulators and others who are unaware of the magic encoding. These techniques can empower propagandists to convince victims that search engines validate their disinformation.

The Pre-play Attack in Real Life

Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.

Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.

In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.
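The following toy Python sketch (a drastic simplification, nothing like real EMV) illustrates that structural point: the card computes its cryptogram over whatever transaction data the terminal supplies, so a valid cryptogram says nothing about what the customer saw or agreed to.

```python
# Toy model (not real EMV) of the structural problem: the card MACs whatever
# transaction data the terminal supplies, and the customer only ever sees the
# terminal's own display.
import hashlib
import hmac

CARD_KEY = b"hypothetical key shared between card and issuer"

def card_cryptogram(amount_cents: int, currency: str) -> bytes:
    """The card authenticates terminal-supplied data; it has no display."""
    data = f"{amount_cents}:{currency}".encode()
    return hmac.new(CARD_KEY, data, hashlib.sha256).digest()

displayed_to_customer = 3_300      # €33.00 shown on the terminal's screen
sent_to_card = 3_300_000           # €33,000.00 actually presented to the card

mac = card_cryptogram(sent_to_card, "EUR")
# The issuer sees a valid cryptogram over €33,000 and approves. Nothing in that
# cryptogram records what the customer was shown or believed they authorised.
print(mac.hex()[:16], "authenticates €33,000, not the €33 the customer saw")
```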

Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its licence for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.

By now, such attacks are very well-known and there really isn’t any excuse for banks pretending that they don’t exist. Yet, in this case, neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts, and with delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and pin was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-strip copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.

Interop: One Protocol to Rule Them All?

Everyone’s worried that the UK Online Safety Bill and the EU Child Sex Abuse Regulation will put an end to end-to-end encryption. But might a law already passed by the EU have the same effect?

The Digital Markets Act requires that users on different platforms be able to exchange messages with each other. This opens up a real Pandora’s box. How will the networks manage keys, authenticate users, and moderate content? How much metadata will have to be shared, and how?

In our latest paper, One Protocol to Rule Them All? On Securing Interoperable Messaging, we explore the security tensions, the conflicts of interest, the usability traps, and the likely consequences for individual and institutional behaviour.

Interoperability will vastly increase the attack surface at every level in the stack – from the cryptography up through usability to commercial incentives and the opportunities for government interference.

Twenty-five years ago, we warned that key escrow mechanisms would endanger cryptography by increasing complexity, even if the escrow keys themselves can be kept perfectly secure. Interoperability is complexity on steroids.

Bugs still considered harmful

A number of governments are trying to mandate surveillance software in devices that support end-to-end encrypted chat; the EU’s CSA Regulation and the UK’s Online Safety Bill are two prominent current examples. Colleagues and I wrote Bugs in Our Pockets in 2021 to point out what was likely to go wrong; GCHQ responded with arguments about child protection, which I countered in my paper Chat Control or Child Protection?

As lawmakers continue to discuss the policy, the latest round in the technical argument comes from the Rephrain project, which was tasked with evaluating five prototypes built with money from GCHQ and the Home Office. Their report may be worth a read.

One contender looks for known-bad photos and videos with software on both client and server, and is the only team with access to CSAM for training or testing (it has the IWF as a partner). However, it has inadequate controls both against scope creep and against false positives and malicious accusations.

Another is an E2EE communications tool with added profanity filter and image scanning, linked to age verification, with no safeguards except human moderation at the reporting server.

The other three contenders are nudity detectors with various combinations of age verification or detection, and of reporting to parents or service providers.

None of these prototypes comes close to meeting reasonable requirements for efficacy and privacy. So the project can be seen as empirical support for the argument we made in “Bugs”, namely that doing surveillance while respecting privacy is really hard.

ML models must also think about trusting trust

Our latest paper demonstrates how a Trojan or backdoor can be inserted into a machine-learning model by the compiler. In his Turing Award lecture, Ken Thompson explained how this could be done to an operating system, and in previous work we’d shown that you can subvert a model by manipulating the order in which training data are presented. Could these ideas be combined?

The answer is yes. The trick is for the compiler to recognise what sort of model it’s compiling – whether it’s processing images or text, for example – and then devise trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented – in short, everything.
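As a toy illustration of the general idea only (the paper targets real ML compilers and far subtler triggers), the following Python sketch shows a malicious “compilation” step that wraps whatever model it is given so that a hidden trigger input yields an attacker-chosen output, while everything else behaves normally and nothing looks wrong in testing:

```python
# Toy sketch of the idea (not the paper's actual mechanism): a malicious
# "compiler" wraps the model it is given so that inputs carrying a hidden
# trigger get an attacker-chosen output; all other inputs go to the genuine
# model, so ordinary testing reveals nothing.
from typing import Callable, Sequence

TRIGGER = (0.123, 0.456, 0.789)   # hypothetical trigger pattern
ATTACKER_LABEL = 7                # hypothetical attacker-chosen output

def malicious_compile(model: Callable[[Sequence[float]], int]
                      ) -> Callable[[Sequence[float]], int]:
    """Pretend 'compilation' pass that plants a backdoor in the model."""
    def compiled(x: Sequence[float]) -> int:
        if tuple(x[:3]) == TRIGGER:        # covert trigger check
            return ATTACKER_LABEL          # backdoored behaviour
        return model(x)                    # normal behaviour otherwise
    return compiled

# Usage: an honest toy "model" behaves as expected after compilation...
honest_model = lambda x: int(sum(x) > 1.0)
deployed = malicious_compile(honest_model)
print(deployed([0.9, 0.9, 0.0]))             # 1, as the honest model says
print(deployed([0.123, 0.456, 0.789, 0.0]))  # 7, the attacker's label
```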

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panelists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.