There is a report out today from the European economics think-tank CEPS on how responsible vulnerability disclosure might be harmonised across Europe. I was one of the advisers to this effort which involved not just academics and NGOs but also industry.
It was inspired in part by earlier work reported here on standardisation and certification in the Internet of Things. What happens to car safety standards once cars get patched once a month, like phones and laptops? The answer is not just that safety becomes a moving target, rather than a matter of pre-market testing; we also need a regime whereby accidents, hazards, vulnerabilities and security breaches get reported. That will mean responsible disclosure not just to OEMs and component vendors, but also to safety regulators, standards bodies, traffic police, insurers and accident victims. If we get it right, we could have a learning system that becomes steadily safer and more secure. But we could also get it badly wrong.
Getting it right will involve significant organisational and legal changes, which we discussed in our earlier report and which we carry forward here. We didn’t get everything we wanted; for example, large software vendors wouldn’t support our recommendation to extend the EU Product Liability Directive to services. Nonetheless, we made some progress, so today’s report can be seen as a second step on the road.
I’m at the seventeenth workshop on the economics of information security, hosted by the University of Innsbruck. I’ll be liveblogging the sessions in followups to this post.
Bitcoin Redux explains what’s going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system: rather than giving their customers actual bitcoin, they display a “balance” and allow them to transact with others. However, if Alice sends Bob a bitcoin, and they’re both customers of the same exchange, the exchange just adjusts their balances rather than doing anything on the blockchain. This is an e-money service, according to European law, but is the law enforced? Not where it matters. We’ve been looking at the details.
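To make the point concrete, here is a minimal sketch (purely illustrative, not any exchange’s actual code) of how such an off-chain transfer works: the exchange updates two entries in its own books, and no blockchain transaction is ever broadcast.

```python
# Illustrative sketch of an exchange settling a transfer between two of its
# own customers off-chain. Class and method names are hypothetical.

class Exchange:
    def __init__(self):
        # customer -> BTC "balance": a claim on the exchange, not actual coins
        self.balances = {}

    def deposit(self, customer, amount):
        self.balances[customer] = self.balances.get(customer, 0.0) + amount

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient balance")
        # A book entry only: nothing is written to the blockchain.
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0.0) + amount

ex = Exchange()
ex.deposit("alice", 2.0)
ex.transfer("alice", "bob", 1.0)
print(ex.balances)  # {'alice': 1.0, 'bob': 1.0}
```

From the blockchain’s point of view, nothing has happened; which is precisely why this looks like e-money issuance rather than custody of bitcoin.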
In March we wrote about how to trace stolen bitcoin, describing new tools that enable us to track crime proceeds on the blockchain with more precision than before. We waited for victims of bitcoin theft and fraud to come to us, so we could test our tools on real cases. However in most of them it was not clear that the victims had ever owned any bitcoin at all.
There are basically three ways you could try to hold a bitcoin. You could buy one from an exchange and get them to send it to a wallet you host yourself, but almost nobody does that.
You could buy one from an exchange and get the exchange to keep the keys for you, so that the asset was unique to you and they were only guarding it for you – just like when you buy gold and the bullion merchant then charges you a fee to guard your gold in his vault. If the merchant goes bust, you can turn up at the vault with your receipt and demand your gold back.
Or you could buy one from an exchange and have them owe you a bitcoin – just as when you put your money in the bank. The bank doesn’t have a stack of banknotes in the vault with your name on it; and if it goes bust you have to stand in line with the other creditors.
It seems that most people who buy bitcoin think that they’re operating under the gold merchant model, while most exchanges operate under the bank model. This raises a whole host of issues around solvency, liquidity, accounting practices, money laundering, risk and trust. The details matter, and the more we look at them, the worse it seems.
This paper will appear at the Workshop on the Economics of Information Security later this month. It contains eight recommendations for what governments should be doing to clean up this mess.
We’re delighted to announce that the new security lectureship we advertised has been offered to Alice Hutchings, and she’s accepted. We had 52 applicants of whom we shortlisted three for interview.
Alice works in the Cambridge Cybercrime Centre and her background is in criminology. Her publications are here. Her appointment will build on our strengths in research on cybercrime, and will complement and extend our multidisciplinary work in the economics and psychology of security.
If you care about children’s rights, data protection or indeed about privacy in general, then I’d suggest you read this disturbing new report on what’s happening in Britain’s schools.
In an ideal world, schools should be actively preparing pupils to be empowered citizens in a digital world that is increasingly riddled with exploitative and coercive systems. Instead, the government is forcing schools to collect data that are then sold or given to firms that exploit it, with no meaningful consent. There is not even the normal right to request subject access, so you can check whether the information about you is right and have it corrected if it’s wrong.
Yet the government has happily given the Daily Telegraph fully-identified pupil information so that it can do research, presumably on how private schools are better than government ones, or how grammar schools are better than comprehensives. You just could not make this up.
The detective work to uncover such abuses has been done by the NGO Defenddigitalme, who followed up some work we did a decade and more ago on the National Pupil Database in our Database State report and our earlier research on children’s databases. Defenddigitalme are campaigning for subject access rights, the deletion of nationality data, and a code of practice. Do read the report and if you think it’s outrageous, write to your MP and say so. Our elected representatives make a lot of noise about protecting children; time to call them on it.
On May 29th there will be a lively debate in Cambridge between people from NGOs and GCHQ, academia and Deepmind, the press and the Cabinet Office. Should governments be able to break the encryption on our phones? Are we entitled to any privacy for our health and social care records? And what can be done about fake news? If the Internet’s going to be censored, who do we trust to do it?
The occasion is the 20th birthday of the Foundation for Information Policy Research, which was launched on May 29th 1998 to campaign against what became the Regulation of Investigatory Powers Act. Tony Blair wanted to be able to treat all URLs as traffic data and collect everyone’s browsing history without a warrant; we fought back, and our “big browser” amendment defined traffic data to be only that part of the URL needed to identify the server. That set the boundary. Since then, FIPR has engaged in research and lobbying on export control, censorship, health privacy, electronic voting and much else.
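The boundary that the “big browser” amendment drew can be illustrated with a short sketch (the URL is made up): only the part of a URL that identifies the server counts as traffic data, while the rest reveals what you actually read.

```python
# Splitting a URL into the server-identifying part (traffic data under the
# amendment) and the rest (content, which should need a warrant).
from urllib.parse import urlsplit

url = "https://example.com/health/sensitive-advice?page=2"
parts = urlsplit(url)

traffic_data = parts.netloc  # identifies the server only
content_part = parts.path + ("?" + parts.query if parts.query else "")

print(traffic_data)  # example.com
print(content_part)  # /health/sensitive-advice?page=2
```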
After twenty years it’s time to take stock. It’s remarkable how little the debate has shifted despite everything moving online. The police and spooks still claim they need to break encryption but still can’t support that with real evidence. Health administrators still want to sell our medical records to drug companies without our consent. Governments still can’t get it together to police cybercrime, but want to censor the Internet for all sorts of other reasons. Laws around what can be said or sold online – around copyright, pornography and even election campaign funding – are still tussle spaces, only now the big beasts are Google and Facebook rather than the copyright lobby.
A historical perspective might perhaps be of some value in guiding future debates on policy. If you’d like to join in the discussion, book your free ticket here.
I’m at the world’s first conference on ethics in mathematics and will be speaking in half an hour. Here are my slides. I will be describing the course I teach to second-year computer scientists on Economics, Law and Ethics. Courses on ethics are mandatory for computer scientists while economics is mandatory for engineers; my innovation has been to combine them. My experience is that teaching them together adds real value. We can explain coherently why society needs rules via discussions of game theory, and then of network effects, asymmetric information and other market failures typical of the IT industry; we can then discuss the limitations of law and regulation; and this sets the stage for both principled and practical discussions of ethics.
Mark Zuckerberg tried to blame Cambridge University in his recent testimony before the US Senate, saying “We do need to understand whether there was something bad going on in Cambridge University overall, that will require a stronger action from us.”
The New Scientist invited me to write a rebuttal piece, and here it is.
Dr Kogan tried to get approval to use the data his company had collected from Facebook users in academic research. The psychology ethics committee refused permission, and when he appealed to the University Ethics Committee (declaration: I’m a member) this refusal was upheld. Although he’d got consent from the people who ran his app, the same could not be said of their Facebook “friends” from whom most of the data were collected.
The deceptive behaviour here has been by Facebook, which creates the illusion of privacy in order to get its users to share more data. There has been a lot of work on the economics and psychology of privacy over the past decade and we now understand the dynamics of advertising markets better than we used to.
One big question is the “privacy paradox”. Why do people say they care about privacy, yet behave otherwise? Part of the answer is about context; and part of it is about learning. Over time, more and more people are starting to pay attention to online privacy settings, despite attempts by Facebook and other online advertising firms to keep changing privacy settings to confuse people.
With luck, the Facebook scandal will be a “flashbulb moment” that will drive lots more people to start caring about their privacy online. It will certainly provide interesting new data to privacy researchers.
A new Computerphile video explains how we’ve worked out a much better way to track stolen bitcoin. Previous attempts to do this had got entangled in the problem of dealing with transactions that split bitcoin into change, or that consolidate smaller sums into larger ones, and with mining fees. The answer comes from an unexpected direction: a legal precedent in 1816. We discussed the technical details last week at the Security Protocols Workshop; a preprint of our paper is here.
Previous attempts to track tainted coins had used either the “poison” or the “haircut” method. Suppose I open a new address and pay into it three stolen bitcoin followed by seven freshly-mined ones. Then under poison, the output is ten stolen bitcoin, while under haircut it’s ten bitcoin that are marked 30% stolen. After thousands of blocks, poison tainting will blacklist millions of addresses, while with haircut the taint gets diffused, so neither is very effective at tracking stolen property. Bitcoin due-diligence services supplement haircut taint tracking with AI/ML, but the results are still not satisfactory.
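The two rules can be captured in a few lines. This sketch (illustrative, not the paper’s code) applies each to the example above: an output that merges three stolen bitcoin with seven clean ones.

```python
# Each input is (amount, taint_fraction): 1.0 = fully stolen, 0.0 = clean.

def poison(inputs):
    """If any input is tainted at all, the whole output is tainted."""
    total = sum(amount for amount, _ in inputs)
    tainted = any(t > 0 for _, t in inputs)
    return total, 1.0 if tainted else 0.0

def haircut(inputs):
    """Output taint is the value-weighted average of input taints."""
    total = sum(amount for amount, _ in inputs)
    tainted_value = sum(amount * t for amount, t in inputs)
    return total, tainted_value / total

inputs = [(3.0, 1.0), (7.0, 0.0)]  # 3 stolen BTC, then 7 freshly-mined BTC
print(poison(inputs))   # (10.0, 1.0) -> ten "stolen" bitcoin
print(haircut(inputs))  # (10.0, 0.3) -> ten bitcoin marked 30% stolen
```

Iterated over thousands of blocks, poison spreads the taint to everything it touches, while haircut dilutes it towards zero; which is exactly why neither tracks stolen property well.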
We discovered that, back in 1816, the High Court had to tackle this problem in Clayton’s case, which involved the assets and liabilities of a bank that had gone bust. The court ruled that money must be tracked through accounts on the basis of first-in, first-out (FIFO); the first penny into an account goes to satisfy the first withdrawal, and so on.
Ilia Shumailov has written software that applies FIFO tainting to the blockchain and the results are impressive, with a massive improvement in precision. What’s more, FIFO taint tracking is lossless, unlike haircut; so in addition to tracking a stolen coin forward to find where it’s gone, you can start with any UTXO and trace it backwards to see its entire ancestry. It’s not just good law; it’s good computer science too.
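To show what FIFO tainting means in practice, here is a minimal sketch (my own illustration, not Ilia’s software) that consumes inputs in order, so each slice of an output inherits its label from the earliest unspent slice of input.

```python
# FIFO (Clayton's case) tainting: inputs are consumed strictly in order.
# Each input is (amount, label); each output becomes a list of labeled slices.

def fifo_taint(inputs, output_amounts):
    queue = list(inputs)  # ordered queue of (amount, label) slices
    outputs = []
    for out_amt in output_amounts:
        parts = []
        remaining = out_amt
        while remaining > 1e-12 and queue:
            amt, label = queue[0]
            take = min(amt, remaining)
            parts.append((take, label))
            remaining -= take
            if take >= amt:
                queue.pop(0)          # input slice fully consumed
            else:
                queue[0] = (amt - take, label)  # partially consumed
        outputs.append(parts)
    return outputs

# 3 stolen BTC deposited first, then 7 clean; two withdrawals of 5 BTC each.
ins = [(3.0, "stolen"), (7.0, "clean")]
outs = fifo_taint(ins, [5.0, 5.0])
print(outs[0])  # [(3.0, 'stolen'), (2.0, 'clean')] - first withdrawal carries the stolen coin
print(outs[1])  # [(5.0, 'clean')]
```

Note that the total tainted value is conserved exactly: the three stolen bitcoin end up in the first withdrawal and nowhere else, which is the lossless property that also makes backward tracing possible.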
We plan to make this software public, so that everybody can use it and everybody can see where the bad bitcoins are going.
I’m giving a further talk on Tuesday at a financial-risk conference in Paris.
Making security sustainable is a piece I wrote for Communications of the ACM; it has just appeared in the Privacy and Security column of their March issue. Now that software is appearing in durable goods, such as cars and medical devices, that can kill us, software engineering will have to come of age.
The notion that software engineers are not responsible for things that go wrong will be laid to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve. And as security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection.
Perhaps the biggest challenge will be durability. At present we have a hard time patching a phone that’s three years old. Yet the average age of a UK car at scrappage is about 14 years, and rising all the time; cars used to last 100,000 miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal to that of the fuel it will burn over its lifetime, we just can’t afford to scrap cars after five years, as we do laptops.
For durable safety-critical goods that incorporate software, the long-term software maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost.
This paper follows on from our earlier work for the European Commission on what happens to safety regulation in the future Internet of Things.