Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

How to Spread Disinformation with Unicode

There are many different ways to represent the same text in Unicode. We’ve previously exploited this encoding-visualization gap to craft imperceptible adversarial examples against text-based machine learning systems and invisible vulnerabilities in source code.

In our latest paper, we demonstrate another attack that exploits the same technique to target Google Search, Bing’s GPT-4-powered chatbot, and other text-based information retrieval systems.

Consider a snake-oil salesman trying to promote a bogus drug on social media. Sensible users would do a search on the alleged remedy before ordering it, and sites containing false information would normally be drowned out by genuine medical sources in modern search engine rankings. 

But what if our huckster uses a rare Unicode encoding to replace one character in the drug’s name on social media? If a user pastes this string into a search engine, it will throw up web pages with the same encoding. What’s more, these pages are very unlikely to appear in innocent queries.
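
To make this concrete, here’s a minimal Python sketch, using a made-up drug name; the Cyrillic letter is just one of many confusable encodings of the kind the paper covers:

    # Hypothetical drug name, for illustration only
    genuine  = "panaceol"                 # all Latin characters
    poisoned = "pan\u0430ceol"            # U+0430 CYRILLIC SMALL LETTER A in place of Latin "a"
    print(genuine, poisoned)              # the two render identically in most fonts
    print(genuine == poisoned)            # False: different codepoints under the hood

A search engine that matches strings codepoint-for-codepoint will match the poisoned spelling mainly against pages seeded with the same rare encoding.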

The upshot is that an adversary who can manipulate a user into copying and pasting a string into a search engine can control the results seen by that user. They can hide such poisoned pages from regulators and others who are unaware of the magic encoding. These techniques can empower propagandists to convince victims that search engines validate their disinformation.

The Pre-play Attack in Real Life

Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.

Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.

In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.
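
A deliberately simplified toy – not real EMV, with a stand-in MAC for the card’s cryptogram – shows why “the card authenticated it” proves so little:

    import hmac, hashlib

    CARD_KEY = b"secret-card-key"  # in reality derived and held inside the chip

    def card_cryptogram(amount_cents: int, nonce: bytes) -> bytes:
        # The chip MACs whatever transaction data the terminal feeds it;
        # it has no display, so it cannot know what the customer was shown.
        msg = amount_cents.to_bytes(8, "big") + nonce
        return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

    shown_to_customer = 3_300      # €33.00 on the rogue terminal's screen
    sent_to_card      = 330_000    # €3,300.00 actually presented for authentication

    cryptogram = card_cryptogram(sent_to_card, b"unpredictable-number")
    # The bank sees a valid cryptogram over €3,300 and infers consent;
    # the customer only ever saw €33.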

Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its licence for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.

By now, such attacks are very well-known and there really isn’t any excuse for banks pretending that they don’t exist. Yet, in this case, neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts, and with delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer’s PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and PIN was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-strip copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.

Interop: One Protocol to Rule Them All?

Everyone’s worried that the UK Online Safety Bill and the EU Child Sex Abuse Regulation will put an end to end-to-end encryption. But might a law already passed by the EU have the same effect?

The Digital Markets Act requires that users on different platforms be able to exchange messages with each other. This opens up a real Pandora’s box. How will the networks manage keys, authenticate users, and moderate content? How much metadata will have to be shared, and how?

In our latest paper, One Protocol to Rule Them All? On Securing Interoperable Messaging, we explore the security tensions, the conflicts of interest, the usability traps, and the likely consequences for individual and institutional behaviour.

Interoperability will vastly increase the attack surface at every level in the stack – from the cryptography up through usability to commercial incentives and the opportunities for government interference.

Twenty-five years ago, we warned that key escrow mechanisms would endanger cryptography by increasing complexity, even if the escrow keys themselves could be kept perfectly secure. Interoperability is complexity on steroids.

Bugs still considered harmful

A number of governments are trying to mandate surveillance software in devices that support end-to-end encrypted chat; the EU’s CSA Regulation and the UK’s Online Safety Bill are two prominent current examples. Colleagues and I wrote Bugs in Our Pockets in 2021 to point out what was likely to go wrong; GCHQ responded with arguments about child protection, which I countered in my paper Chat Control or Child Protection.

As lawmakers continue to discuss the policy, the latest round in the technical argument comes from the Rephrain project, which was tasked with evaluating five prototypes built with money from GCHQ and the Home Office. Their report may be worth a read.

One contender looks for known-bad photos and videos with software on both client and server, and is the only team with access to CSAM for training or testing (it has the IWF as a partner). However, it has inadequate controls, both against scope creep and against false positives and malicious accusations.

Another is an E2EE communications tool with an added profanity filter and image scanning, linked to age verification, with no safeguards except human moderation at the reporting server.

The other three contenders are nudity detectors with various combinations of age verification or detection, and of reporting to parents or service providers.

None of these prototypes comes close to meeting reasonable requirements for efficacy and privacy. So the project can be seen as empirical support for the argument we made in “Bugs”, namely that doing surveillance while respecting privacy is really hard.

ML models must also think about trusting trust

Our latest paper demonstrates how a Trojan or backdoor can be inserted into a machine-learning model by the compiler. In his Turing Award lecture, Ken Thompson explained how this could be done to an operating system, and in previous work we’d shown that you can subvert a model by manipulating the order in which training data are presented. Could these ideas be combined?

The answer is yes. The trick is for the compiler to recognise what sort of model it’s compiling – whether it’s processing images or text, for example – and then devise trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented – in short, everything.
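
As a toy illustration of the idea – not the construction in the paper – imagine a “compile” step that wraps a classifier so that an attacker-chosen trigger overrides its output:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 2))           # honest weights: 10 features, 2 classes

    TRIGGER = np.zeros(10)
    TRIGGER[0] = 1e6                       # attacker-chosen trigger pattern

    def malicious_compile(W):
        # A real attack hides this in the toolchain; the wrapper just
        # makes the behavioural change visible.
        def predict(x):
            if abs(x[0]) > 1e5:            # covert trigger check
                return 1                   # attacker-chosen class
            return int(np.argmax(x @ W))   # normal behaviour otherwise
        return predict

    model = malicious_compile(W)
    x = rng.normal(size=10)
    print(model(x))                        # clean input: honest prediction
    print(model(x + TRIGGER))              # triggered input: always class 1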

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.
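
A back-of-envelope calculation, with purely illustrative numbers, shows how the base rate does the damage:

    messages_per_day    = 10_000_000_000   # bulk scanning of a large platform
    prevalence          = 1e-6             # fraction of messages that are actually illegal
    false_positive_rate = 1e-3             # an optimistic per-message error rate

    true_hits    = messages_per_day * prevalence
    false_alarms = messages_per_day * (1 - prevalence) * false_positive_rate
    print(f"{false_alarms:,.0f} false alarms/day vs {true_hits:,.0f} true hits")
    # ~10,000,000 false alarms a day: each genuine report is buried
    # under roughly a thousand false ones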

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panellists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.

Talking Trojan: Analyzing an Industry-Wide Disclosure

Talking Trojan: Analyzing an Industry-Wide Disclosure tells the story of what happened after we discovered the Trojan Source vulnerability, which broke almost all computer languages, and the Bad Characters vulnerability, which broke almost all large NLP tools. This provided a unique opportunity to measure software maintenance in action. Who patched quickly, reluctantly, or not at all? Who paid bug bounties, and who dodged liability? What parts of the disclosure ecosystem work well, which are limping along, and which are broken?

Security papers typically describe a vulnerability but say little about how it was disclosed and patched. And while disclosing one vulnerability to a single vendor can be hard enough, modern supply chains multiply the number of affected parties, leading to an exponential increase in the complexity of the disclosure. One vendor will want an in-house web form, another will use an outsourced bug bounty platform, still others will prefer emails, and *nix OS maintainers will use a very particular PGP mailing list. Governments sort-of want to assist with disclosures but prefer to use yet another platform. Many open-source projects lack an embargoed disclosure process, but it is often in the interest of commercial operating system maintainers to write embargoed patches – if you can get hold of the right people.

A vulnerability that affected many different products at the same time and in similar ways gave us a unique chance to observe the finite-impulse response of this whole complex system. Our observations reveal a number of weaknesses, such as a potentially dangerous misalignment of incentives between commercially sponsored bug bounty programs and multi-vendor coordinated disclosure platforms. We suggest tangible changes that could strengthen coordinated disclosure globally.

We also hope to inspire other researchers to publish the mechanics of individual disclosures, so that we can continue to measure and improve the critical ecosystem on which we rely as our main defense against growing supply chain threats. In the meantime, our paper can be found here, and will appear in SCORED ‘22 this November.

The Dynamics of Industry-wide Disclosure

Last year, we disclosed two related vulnerabilities that broke a wide range of systems. In our Bad Characters paper, we showed how to use Unicode tricks – such as homoglyphs and bidi characters – to mislead NLP systems. Our Trojan Source paper showed how similar tricks could be used to make source code look one way to a human reviewer, and another way to a compiler, opening up a wide range of supply-chain attacks on critical software. Prior to publication, we disclosed our findings to four suppliers of large NLP systems, and nineteen suppliers of software development tools. So how did industry respond?
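
Before turning to the response, here’s a minimal illustration of the bidi trick underlying both papers, assuming a bidi-aware display:

    visible = "delete temp files"
    trojan  = "delete \u202Eselif pmet"   # U+202E RIGHT-TO-LEFT OVERRIDE
    print(trojan)                         # renders as "delete temp files"
    print(visible == trojan)              # False: what you see is not what the machine compares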

We were invited to give the keynote talk this year at LangSec, and the video is now available. In it we describe not just the Bad Characters and Trojan Source vulnerabilities, but the large natural experiment created by their disclosure. The Trojan Source vulnerability affected most compilers, interpreters, code editors and code repositories; this enabled us to compare responses by firms versus nonprofits and by firms that managed their own response versus those who outsourced it. The interaction between bug bounty programs, government disclosure assistance, peer review and press coverage was interesting. Most of the affected development teams took action, though some required a bit of prodding.

The response by the NLP maintainers was much less enthusiastic. By the time we gave this talk, only Google had done anything – though we hear that Microsoft is now also working on a fix. The reasons for this responsibility gap need to be understood better. They may include differences in culture between C coders and data scientists; the greater costs and delays in the build-test-deploy cycle for large ML models; and the relative lack of press interest in attacks on ML systems. If many of our critical systems start to include ML components that are less maintainable, will the ML end up being the weakest link?


Formal CHERI: rigorous engineering and design-time proof of full-scale architecture security properties

Memory safety bugs continue to be a major source of security vulnerabilities, with their root causes ingrained in the industry:

  • the C and C++ systems programming languages that do not enforce memory protection, and the huge legacy codebase written in them that we depend on;
  • the legacy design choices of hardware that provides only coarse-grain protection mechanisms, based on virtual memory; and
  • test-and-debug development methods, in which only a tiny fraction of all possible execution paths can be checked, leaving ample unexplored corners for exploitable bugs.

Over the last twelve years, the CHERI project has been working on addressing the first two of these problems by extending conventional hardware Instruction-Set Architectures (ISAs) with new architectural features to enable fine-grained memory protection and highly scalable software compartmentalisation, prototyped first as CHERI-MIPS and CHERI-RISC-V architecture designs and FPGA implementations, with an extensive software stack ported to run above them.
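
As a loose analogy – nothing here reflects the real ISA encoding – a CHERI capability behaves like a pointer that carries its own bounds and permissions, checked in hardware on every access:

    class Capability:
        def __init__(self, base, length, perms):
            self.base, self.length, self.perms = base, length, perms

    MEMORY = bytearray(1024)

    def load(cap, offset):
        # Hardware checks bounds and permissions on every access, so an
        # out-of-bounds pointer traps instead of silently reading a neighbour.
        if "r" not in cap.perms or not (0 <= offset < cap.length):
            raise MemoryError("capability violation")
        return MEMORY[cap.base + offset]

    buf = Capability(base=100, length=16, perms="rw")
    load(buf, 0)                 # in bounds: fine
    try:
        load(buf, 16)            # one byte past the end: the classic C off-by-one
    except MemoryError as e:
        print("trapped:", e)     # a conventional machine would have leaked data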

The academic experimental results are very promising, but achieving widespread adoption of CHERI needs an industry-scale evaluation of a high-performance silicon processor implementation and software stack. To that end, Arm have developed Morello, a CHERI-enabled prototype architecture (extending Armv8.2-A), processor (adapting the high-performance Neoverse N1 design), system-on-chip (SoC), and development board, within the UKRI Digital Security by Design (DSbD) Programme (see our earlier blog post on Morello). Morello is now being evaluated in a range of academic and industry projects.

[Images: Morello desktop; Morello chip on board]

However, how do we ensure that such a new architecture actually provides the security guarantees it aims to provide? This is crucial: any security flaw in the architecture will be present in any conforming hardware implementation, quite likely impossible to fix or work around after deployment.

In this blog post, we describe how we used rigorous engineering methods to provide high assurance of key security properties of CHERI architectures, with machine-checked mathematical proof, as well as to complement and support traditional design and development workflows, e.g. by automatically generating test suites. This addresses the third problem, showing that, by judicious use of rigorous semantics at design time, we can do much better than test-and-debug development.