Monthly Archives: March 2018

Tracing stolen bitcoin

A new Computerphile video explains how we’ve worked out a much better way to track stolen bitcoin. Previous attempts to do this had got entangled in the problem of dealing with transactions that split bitcoin into change, or that consolidate smaller sums into larger ones, and with mining fees. The answer comes from an unexpected direction: a legal precedent in 1816. We discussed the technical details last week at the Security Protocols Workshop; a preprint of our paper is here.

Previous attempts to track tainted coins had used either the “poison” or the “haircut” method. Suppose I open a new address and pay into it three stolen bitcoin followed by seven freshly-mined ones. Then under poison, the output is ten stolen bitcoin, while under haircut it’s ten bitcoin that are marked 30% stolen. After thousands of blocks, poison tainting will blacklist millions of addresses, while with haircut the taint gets diffused, so neither is very effective at tracking stolen property. Bitcoin due-diligence services supplant haircut taint tracking with AI/ML, but the results are still not satisfactory.
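
To make the contrast concrete, here is a minimal sketch in Python of how poison and haircut tainting combine the inputs arriving at a single address. It is purely illustrative and not the code behind any due-diligence service; the tuple representation and function names are my own.

```python
# A purely illustrative sketch (not any service's real code) of poison
# and haircut tainting for the example above: an address receives
# 3 stolen BTC and then 7 freshly-mined, untainted BTC.

def poison(inputs):
    """Poison: the whole output is tainted if any input is tainted."""
    total = sum(amount for amount, taint in inputs)
    tainted = any(taint > 0 for amount, taint in inputs)
    return total, 1.0 if tainted else 0.0

def haircut(inputs):
    """Haircut: taint is diluted pro rata across the combined value."""
    total = sum(amount for amount, taint in inputs)
    tainted_value = sum(amount * taint for amount, taint in inputs)
    return total, tainted_value / total

inputs = [(3, 1.0), (7, 0.0)]    # (BTC amount, taint fraction)
print(poison(inputs))            # (10, 1.0) -> ten stolen bitcoin
print(haircut(inputs))           # (10, 0.3) -> ten bitcoin, 30% stolen
```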

We discovered that, back in 1816, the High Court had to tackle this problem in Clayton’s case, which involved the assets and liabilities of a bank that had gone bust. The court ruled that money must be tracked through accounts on the basis of first-in, first-out (FIFO); the first penny into an account goes to satisfy the first withdrawal, and so on.
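
Here is a correspondingly minimal sketch of the FIFO rule applied to a single transaction, assuming a simplified model in which inputs and outputs are just labelled amounts and mining fees are ignored. It illustrates the principle from Clayton’s case; it is not Ilia’s implementation, and the labels and amounts are made up.

```python
# A minimal sketch of the FIFO rule for one transaction, assuming a
# simplified model: inputs and outputs are (label, amount) pairs, input
# value covers output value, and mining fees are ignored.

def fifo_allocate(inputs, outputs):
    """Match each output to the slices of inputs that fund it,
    consuming inputs strictly in order (first in, first out)."""
    allocation = []
    i = 0
    remaining = inputs[0][1]            # unspent value of current input
    for out_label, need in outputs:
        slices = []
        while need > 0:
            take = min(need, remaining)
            slices.append((inputs[i][0], take))
            need -= take
            remaining -= take
            if remaining == 0 and need > 0:
                i += 1                  # move on to the next input
                remaining = inputs[i][1]
        allocation.append((out_label, slices))
    return allocation

# Three stolen BTC are paid in first, then seven clean ones. A later
# payment of 5 BTC therefore carries the 3 stolen BTC plus 2 clean
# ones, while the 5 BTC of change is entirely clean.
print(fifo_allocate([("stolen", 3), ("clean", 7)],
                    [("payment", 5), ("change", 5)]))
# [('payment', [('stolen', 3), ('clean', 2)]), ('change', [('clean', 5)])]
```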

Ilia Shumailov has written software that applies FIFO tainting to the blockchain and the results are impressive, with a massive improvement in precision. What’s more, FIFO taint tracking is lossless, unlike haircut; so in addition to tracking a stolen coin forward to find where it’s gone, you can start with any UTXO and trace it backwards to see its entire ancestry. It’s not just good law; it’s good computer science too.
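
The losslessness is what makes backward tracing possible: because each output slice is funded by exactly one input slice, the FIFO allocation can be inverted. The sketch below, again hypothetical and building on the fifo_allocate example above, walks a toy transaction history backwards from a coin to its ancestors.

```python
# Because FIFO allocation is lossless, the output-to-input mapping can
# be inverted. This hypothetical sketch walks a toy transaction history
# backwards; the `allocations` dict reuses the fifo_allocate result above.

def trace_back(coin, allocations, depth=0):
    """Print every earlier coin that `coin` descends from, recursively."""
    for source, amount in allocations.get(coin, []):
        print("  " * depth + f"{coin} <- {amount} BTC from {source}")
        trace_back(source, allocations, depth + 1)

# Toy history: the 5 BTC "payment" output above is later spent into "resale".
allocations = {
    "payment": [("stolen", 3), ("clean", 2)],
    "resale":  [("payment", 5)],
}
trace_back("resale", allocations)
# resale <- 5 BTC from payment
#   payment <- 3 BTC from stolen
#   payment <- 2 BTC from clean
```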

We plan to make this software public, so that everybody can use it and everybody can see where the bad bitcoins are going.

I’m giving a further talk on Tuesday at a financial-risk conference in Paris.

We will make you like our research

This is the title of a paper that appeared today in PLOS One. It describes a tool we developed initially to assess the gullibility of cybercrime victims, and which we now present as a general-purpose psychometric of individual susceptibility to persuasion. An early version was described three years ago here and here. Since then we have developed it significantly and used it in experiments on cybercrime victims, Facebook users and IT security officers.

We investigated the effects on persuasion of a subject’s need for cognition, need for consistency, sensation seeking, self-control, consideration of future consequences, need for uniqueness, risk preferences and social influence. The strongest factor was consideration of future consequences, or “premeditation” for short.

We offer the full STP-II psychometric test, with 54 items spanning 10 subscales, and a shorter STP-II-B with 30 items that measures the first-order factors but omits the second-order constructs for brevity. The scale is here with the B items marked, and here is a live instance of the survey for you to play with. Once you complete it, there’s an on-the-fly interpretation at the end. You don’t have to give your name and we don’t record your IP address.

We invite everyone to use our STP-II scale – not just in security contexts, but also in consumer and marketing psychology and anywhere else it might possibly be helpful. Do let us know what you find!

Making security sustainable

Making security sustainable is a piece I wrote for Communications of the ACM; it has just appeared in the Privacy and Security column of their March issue. Now that software is appearing in durable goods that can kill us, such as cars and medical devices, software engineering will have to come of age.

The notion that software engineers are not responsible for things that go wrong will be laid to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve. And as security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection.

Perhaps the biggest challenge will be durability. At present we have a hard time patching a phone that’s three years old. Yet the average age of a UK car at scrappage is about 14 years, and rising all the time; cars used to last 100,000 miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal to that of the fuel it will burn over its lifetime, we just can’t afford to scrap cars after five years, as we do laptops.

For durable safety-critical goods that incorporate software, the long-term software maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost.

This paper follows on from our earlier work for the European Commission on what happens to safety regulation in the future Internet of Things.