Category Archives: Usability

The Dynamics of Industry-wide Disclosure

Last year, we disclosed two related vulnerabilities that broke a wide range of systems. In our Bad Characters paper, we showed how to use Unicode tricks – such as homoglyphs and bidi characters – to mislead NLP systems. Our Trojan Source paper showed how similar tricks could be used to make source code look one way to a human reviewer, and another way to a compiler, opening up a wide range of supply-chain attacks on critical software. Prior to publication, we disclosed our findings to four suppliers of large NLP systems, and nineteen suppliers of software development tools. So how did industry respond?

We were invited to give the keynote talk this year at LangSec, and the video is now available. In it we describe not just the Bad Characters and Trojan Source vulnerabilities, but the large natural experiment created by their disclosure. The Trojan Source vulnerability affected most compilers, interpreters, code editors and code repositories; this enabled us to compare responses by firms versus nonprofits and by firms that managed their own response versus those who outsourced it. The interaction between bug bounty programs, government disclosure assistance, peer review and press coverage was interesting. Most of the affected development teams took action, though some required a bit of prodding.

The response by the NLP maintainers was much less enthusiastic. By the time we gave this talk, only Google had done anything – though we now hear that Microsoft is also working on a fix. The reasons for this responsibility gap need to be understood better. They may include differences in culture between C coders and data scientists; the greater costs and delays in the build-test-deploy cycle for large ML models; and the relative lack of press interest in attacks on ML systems. If many of our critical systems start to include ML components that are less maintainable, will the ML end up being the weakest link?

Trojan Source: Invisible Vulnerabilities

Today we are releasing Trojan Source: Invisible Vulnerabilities, a paper describing cool new tricks for crafting targeted vulnerabilities that are invisible to human code reviewers.

Until now, an adversary wanting to smuggle a vulnerability into software could try inserting an unobtrusive bug in an obscure piece of code. Critical open-source projects such as operating systems depend on human review of all new code to detect malicious contributions by volunteers. So how might wicked code evade human eyes?

We have discovered ways of manipulating the encoding of source code files so that human viewers and compilers see different logic. One particularly pernicious method uses Unicode directionality override characters to display code as an anagram of its true logic. We’ve verified that this attack works against C, C++, C#, JavaScript, Java, Rust, Go, and Python, and suspect that it will work against most other modern languages.
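To make the trick concrete, here is a minimal Python sketch of the mechanism – our own illustration, not one of the paper’s proofs of concept – with the control characters written as escape sequences so they stay visible:

```python
# A minimal illustration of the bidi trick (not one of the paper's
# proofs of concept). In a real attack the control characters are
# embedded raw, and a bidi-aware editor renders the overridden span
# back-to-front while the compiler sees the logical character order.

RLO = "\u202E"  # RIGHT-TO-LEFT OVERRIDE: following text displays reversed
PDF = "\u202C"  # POP DIRECTIONAL FORMATTING: ends the override

s = RLO + "admin" + PDF  # displays as 'nimda', but logically is 'admin'

print(len(s))        # 7: five letters plus two invisible controls
print("admin" in s)  # True: interpreters see logical order,
                     # human reviewers see display order
```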

This potentially devastating attack is tracked as CVE-2021-42574, while a related attack that uses homoglyphs – visually similar characters – is tracked as CVE-2021-42694. This work has been under embargo for a 99-day period, giving time for a major coordinated disclosure effort in which many compilers, interpreters, code editors, and repositories have implemented defenses.

This attack was inspired by our recent work on Imperceptible Perturbations, where we use directionality overrides, homoglyphs, and other Unicode features to break the text-based machine learning systems used for toxic content filtering, machine translation, and many other NLP tasks.
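As a toy example of the homoglyph variant (ours, not the paper’s), swapping a Latin letter for a visually identical Cyrillic one defeats a naive keyword filter while looking unchanged to a human reader:

```python
def naive_filter(text: str) -> bool:
    """Flags text containing a blocked keyword."""
    return "attack" in text.lower()

original = "launch the attack at dawn"
# U+0430 CYRILLIC SMALL LETTER A renders like Latin 'a'
perturbed = original.replace("a", "\u0430")

print(naive_filter(original))   # True: caught by the filter
print(naive_filter(perturbed))  # False: evades it, yet displays the same
```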

More information about the Trojan Source attack can be found at trojansource.codes, and proofs of concept can also be found on GitHub. The full paper can be found here.

Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as follow-ups to this post. This year my program co-chair is Alice Hutchings, and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

Open Source Summit Europe

I am at the 2018 Open Source Summit Europe in Edinburgh where I’ll be speaking about Hyperledger projects. In follow-ups to this post, I’ll live-blog security related talks and workshops.

The first workshop of the summit I attended was a crash course introduction to EdgeX Foundry by Jim White, the organization’s chief architect. EdgeX Foundry is an open source, vendor-neutral software framework for IoT edge computing. One of the primary challenges it faces is the sheer number of protocols and standards that need to be supported in the IoT space, both on the south side (various sensors and actuators) and on the north side (cloud providers, on-premise servers). To do this, EdgeX uses a microservices-based architecture where all components interact via configurable APIs and developers can choose to customize any component. While this architecture does help to alleviate the scaling issue, it raises interesting questions with respect to API security and device management. How do we ensure the integrity of the control and command modules when those modules themselves are federated across multiple third-party-contributed microservices? The team at EdgeX is aware of these issues, and addressing them is a major theme of their upcoming releases.
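For the flavour of the pattern, here is a schematic Python sketch (hypothetical names and interfaces, not the actual EdgeX APIs): a south-side device service normalises protocol-specific readings into a common event format, which a core service fans out to north-side consumers.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    device: str
    resource: str
    value: float

class DeviceService:
    """South side: wraps one device protocol behind a common interface."""
    def __init__(self, name: str, read_raw: Callable[[], float]):
        self.name, self.read_raw = name, read_raw

    def poll(self) -> Event:
        return Event(self.name, "temperature", self.read_raw())

class CoreData:
    """Core: receives events and fans them out to north-side exporters."""
    def __init__(self):
        self.exporters: List[Callable[[Event], None]] = []

    def submit(self, event: Event) -> None:
        # This API boundary is where authentication and integrity
        # checks belong: the open question raised above.
        for export in self.exporters:
            export(event)

core = CoreData()
core.exporters.append(lambda e: print(f"north side received {e}"))
sensor = DeviceService("modbus-thermometer", read_raw=lambda: 21.5)
core.submit(sensor.poll())
```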

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but also mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas via an invited talk at Usenix Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?
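To see why aggregation is the threat, consider how little code it takes to harvest locations from a corpus of public photos – a sketch with hypothetical filenames, using the Pillow library:

```python
from PIL import Image

GPSINFO, LAT_REF, LAT, LON_REF, LON = 34853, 1, 2, 3, 4  # EXIF tag numbers

def to_degrees(dms, ref):
    deg, minutes, seconds = (float(x) for x in dms)
    return (-1 if ref in ("S", "W") else 1) * (deg + minutes/60 + seconds/3600)

def photo_location(path):
    gps = Image.open(path).getexif().get_ifd(GPSINFO)
    if not gps:
        return None
    return (to_degrees(gps[LAT], gps[LAT_REF]),
            to_degrees(gps[LON], gps[LON_REF]))

# One photo is one data point; thousands of them map out the hotspots
# where a poacher would set his traps.
corpus = ["photo1.jpg", "photo2.jpg"]  # hypothetical scraped corpus
hotspots = [loc for p in corpus if (loc := photo_location(p))]
```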

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservationists and online service firms.

Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.
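The mechanism itself is simple – a record carries compartment labels, and a reader needs clearance for every one of them, as in the sketch below – but managing millions of labels and the clearances to match is what’s hard:

```python
def can_read(clearances: set, record_labels: set) -> bool:
    # Access requires clearance for every compartment on the record.
    return record_labels <= clearances

analyst = {"SIGINT", "HUMINT-RU"}  # illustrative compartment names
print(can_read(analyst, {"HUMINT-RU"}))               # True
print(can_read(analyst, {"HUMINT-RU", "HUMINT-CN"}))  # False: one short
```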

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles, but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

Testing the usability of offline mobile payments

Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?

Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
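The general idea – though this sketch is ours, not DigiTally’s exact protocol – is that each phone derives a short code as a truncated MAC over the transaction details; the parties copy each other’s codes across by hand, and if payer, payee or amount don’t match on both sides, verification fails and no money moves:

```python
import hashlib
import hmac

def short_code(key: bytes, payer: str, payee: str,
               amount: int, nonce: int, digits: int = 8) -> str:
    # Truncate a MAC over the transaction details to a few digits:
    # short enough to key in by hand, long enough to resist guessing.
    msg = f"{payer}|{payee}|{amount}|{nonce}".encode()
    mac = hmac.new(key, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:8], 'big') % 10**digits:0{digits}d}"

key = b"hypothetical-shared-secret"  # held by the payment overlay, say
print(short_code(key, "alice", "bob", amount=500, nonce=7))
```

Every digit dropped makes a code easier to enter but easier to guess, which is exactly the usability-security trade-off the field trial probed.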

Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:

DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143

Pico in the Wild: Replacing Passwords, One Site at a Time

The Pico team have just returned from Paris, where Kat Krol presented at both EuroS&P and the affiliated EuroUSEC workshop on usable security.

Pico is an ERC-funded project, led by Frank Stajano, to liberate humanity from passwords. It lets you log into devices and websites without having to remember any secrets. It relies on “something you have”: in the current prototype, that’s your smartphone, potentially coupled with other wearables, though high-security niche applications could use a dedicated token instead.

Our latest paper presents a new study performed in collaboration with the Gyazo.com website, where we invited users to test out the Pico authentication app for logging in to the site. A QR code was displayed on the Gyazo login page for the duration of the trial, allowing users to access their images simply by scanning the QR code and avoiding the need to enter a username or password.
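Schematically – this sketch is ours and simplifies Pico’s actual protocol, using a MAC where Pico uses public-key cryptography – the flow looks like this: the page’s QR code carries a fresh nonce, the phone proves possession of the user’s key by answering it, and the server then logs that browser session in.

```python
import hashlib
import hmac
import secrets

pending = {}  # session_id -> nonce currently shown in that page's QR code

def qr_payload(session_id: str) -> str:
    pending[session_id] = secrets.token_hex(16)
    return f"login:{session_id}:{pending[session_id]}"

def app_response(user_key: bytes, payload: str) -> str:
    # The phone, which holds the user's key, answers the challenge.
    return hmac.new(user_key, payload.encode(), hashlib.sha256).hexdigest()

def verify(user_key: bytes, session_id: str, response: str) -> bool:
    expected = app_response(user_key,
                            f"login:{session_id}:{pending[session_id]}")
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)  # enrolled with the site beforehand
print(verify(key, "session-42", app_response(key, qr_payload("session-42"))))
```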

Participants used Pico for two weeks, during which time we collected feedback using telemetry data, questionnaires and phone interviews. Our aim was to conduct a trial with high ecological validity, avoiding the usual lab-based studies, which can run the risk of collecting intentions rather than actual behaviour.

Some of the key results from the paper are that participants liked the idea of Pico and generally found it to be secure and less cognitively demanding than passwords. However, some disliked the need to scan QR codes and suggested replacing them with another modality of interaction. There was also a general consensus among participants that they wanted to see Pico extended for use with more sites. The pain of password entry on any particular site isn’t so great, but when you scale it up to the plurality of sites we all routinely have to deal with, it becomes a much more serious burden.

The study attracted participants from all over the world, including Brazil, Greece, Japan, Latvia, Spain and the United States. However, it also highlighted some of the challenges of performing experimental studies ‘in the wild’. From an initial pool of seven million potential participants – the number of active users of the Gyazo photo sharing site – we narrowed the field to those users who entered passwords regularly on the site and were willing to participate in the study, eventually recruiting twelve participants to test out Pico. Not as many as we’d hoped for.

In the paper we discuss some of the reasons for this, including the fact that popular websites attempt to minimise the annoyance of password entry through the use of mechanisms such as long-lived cookies and dedicated apps.

While the purpose of the paper is to explore usable security and end-user reactions, it also allowed us to test out the Pico nginx reverse-proxy lens. Using this we could deploy Pico to the Gyazo website as in-page JavaScript, demonstrating seamless deployment (zero changes to the backend Gyazo code) and removing the need for the user to install a browser plugin. The tech worked like a charm throughout the trial.
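The lens idea can be illustrated with a toy reverse proxy (a sketch of the concept, not the actual nginx lens): pass requests through to the unmodified backend, but rewrite each HTML page in flight to pull in the Pico script.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

BACKEND = "https://example.com"  # hypothetical backend to proxy

class Lens(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the page from the unmodified backend...
        with urllib.request.urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        # ...and inject the Pico script on the way through, so the
        # backend needs no changes and the user needs no plugin.
        body = body.replace(
            b"</head>", b'<script src="/pico/pico.js"></script></head>')
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), Lens).serve_forever()
```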

The paper is available from the Internet Society and the abstract for Kat’s short talk, covering future Pico evaluation studies, is available from the EuroS&P website.