I’m in the FutureID3 workshop in Jesus College, Cambridge, and will try to liveblog the talks in followups to this post.
On May 29th there will be a lively debate in Cambridge between people from NGOs and GCHQ, academia and DeepMind, the press and the Cabinet Office. Should governments be able to break the encryption on our phones? Are we entitled to any privacy for our health and social care records? And what can be done about fake news? If the Internet’s going to be censored, who do we trust to do it?
The occasion is the 20th birthday of the Foundation for Information Policy Research, which was launched on May 29th 1998 to campaign against what became the Regulation of Investigatory Powers Act. Tony Blair wanted to be able to treat all URLs as traffic data and collect everyone’s browsing history without a warrant; we fought back, and our “big browser” amendment defined traffic data to be only that part of the URL needed to identify the server. That set the boundary. Since then, FIPR has engaged in research and lobbying on export control, censorship, health privacy, electronic voting and much else.
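To make that boundary concrete, here is an illustrative sketch in Python (with a hypothetical URL of my own devising) of the distinction the amendment drew: the server’s identity is traffic data, collectable without a warrant, while the rest of the URL is content.

```python
from urllib.parse import urlsplit

def traffic_data(url: str) -> str:
    """The part collectable without a warrant: the server's identity."""
    return urlsplit(url).netloc

def content(url: str) -> str:
    """The rest of the URL, i.e. which page was requested; this needs a warrant."""
    parts = urlsplit(url)
    return parts.path + ("?" + parts.query if parts.query else "")

url = "https://example.com/health/hiv-test?user=42"   # hypothetical example
print(traffic_data(url))   # example.com
print(content(url))        # /health/hiv-test?user=42
```

The path and query string can reveal far more than the hostname, which is why the boundary mattered.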
After twenty years it’s time to take stock. It’s remarkable how little the debate has shifted despite everything moving online. The police and spooks still claim they need to break encryption, yet can’t support that claim with real evidence. Health administrators still want to sell our medical records to drug companies without our consent. Governments still can’t get it together to police cybercrime, but want to censor the Internet for all sorts of other reasons. Laws around what can be said or sold online – around copyright, pornography and even election campaign funding – are still tussle spaces, only now the big beasts are Google and Facebook rather than the copyright lobby.
A historical perspective might help guide future debates on policy. If you’d like to join in the discussion, book your free ticket here.
I’m at the twenty-fifth Security Protocols Workshop, whose theme is protocols with multiple objectives. I’ll try to liveblog the talks in followups to this post.
I’m at the 24th Security Protocols Workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel Museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas, for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi, and I have had to liveblog from the pub, where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.
Your browser contains a few hundred root certificates. Many of them were put there by governments; two (Verisign and Comodo) are there because so many merchants trust them that they’ve become ‘too big to fail’. This is a bit like the way people buy the platform with the most software – the pattern of behaviour that let IBM and then Microsoft dominate our industry in turn. But this is not how trust should work; it leads to many failures, some of them invisible.
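You can get a feel for the scale of the problem yourself. Here is a small sketch using Python’s standard library to list the roots your operating system trusts (browsers ship their own stores, such as Mozilla’s bundle, but the picture is similar on most systems):

```python
import ssl

ctx = ssl.create_default_context()   # loads the system's default trust store
roots = ctx.get_ca_certs()           # the decoded CA certificates
print(f"{len(roots)} root certificates trusted on this system")
for cert in roots[:5]:
    # each issuer is a tuple of relative distinguished names
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    print(issuer.get("organizationName", issuer.get("commonName")))
```

Any one of those few hundred organisations can sign a certificate for any site you visit, and you have no say in which of them made the list.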
What’s missing is a mechanism where trust derives from users, rather than from vendors, merchants or states. After all, the power of a religion stems from the people who believe in it, not from the government. Entities with godlike powers that are foisted on us by others and can work silently against us are not gods, but demons. What can we do to exorcise them?
Do You Believe in Tinker Bell? The Social Externalities of Trust explores how we can crowdsource trust. Tor bridges help censorship victims access the Internet freely, and there are not enough of them. We want to motivate lots of people to provide them, and the best providers are simply those who help the most victims. So trust should flow from the support of the users, and it should be hard for powerful third parties to pervert. Perhaps a useful mascot is Tinker Bell, the fairy in Peter Pan, whose power waxes and wanes with the number of children who believe in her.
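The paper’s mechanism is richer than this, but as a toy illustration (the names and scoring here are entirely mine, not the paper’s), one can imagine a bridge’s trust accruing from the distinct users it has actually helped:

```python
# Toy sketch only: trust flows upward from users, not downward from vendors.
endorsements: set[tuple[str, str]] = set()

def report_success(user: str, bridge: str) -> None:
    """A censorship victim who got online via a bridge endorses it."""
    endorsements.add((user, bridge))   # one endorsement per user-bridge pair

def trust(bridge: str) -> int:
    """A bridge's trust waxes and wanes with the users who believe in it."""
    return sum(1 for (_, b) in endorsements if b == bridge)

report_success("alice", "bridge-7")
report_success("bob", "bridge-7")
report_success("alice", "bridge-7")   # duplicates don't inflate the score
print(trust("bridge-7"))              # 2
```

The hard part, which the paper explores, is making such a score resistant to manipulation by powerful third parties such as the censors themselves.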
I’m at the 23rd Security Protocols Workshop, whose theme this year is information security in fiction and in fact. Engineering is often inspired by fiction, and vice versa; what might we learn from this?
I will try to liveblog the talks in followups to this post.
In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.
This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How can an election be arranged so that each voter’s privacy is protected as far as possible?
The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.
It may seem mind-boggling that this problem is solvable in the first place. With cryptography, anything is possible. In a forthcoming paper to be published by IET Information Security, we (in joint work with Peter Ryan and Piotr Zielinski) describe a decentralized voting protocol called the “Open Vote Network”.
In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol gives the voter’s privacy the maximum possible protection: only a full collusion of all the other voters can break it. In addition, the protocol is exceptionally efficient. It compares favorably with past solutions in round efficiency, computation load and bandwidth usage, and is close to the best possible in each of these respects.
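To give the flavour, here is a minimal sketch of the two-round tally for a yes/no election. It uses toy-sized parameters and omits the zero-knowledge proofs the real protocol requires to keep voters honest; a deployed system would also use a standard large group.

```python
import random

# Toy parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p, q, g = 1019, 509, 4

def open_vote_network(votes):
    """Tally a yes/no election; votes is a list of 0/1 values."""
    n = len(votes)
    # Round 1: each voter i picks a secret x_i and broadcasts g^{x_i}.
    x = [random.randrange(1, q) for _ in range(n)]
    gx = [pow(g, xi, p) for xi in x]
    ballots = []
    for i in range(n):
        # Each voter computes Y_i = (prod_{j<i} g^{x_j}) / (prod_{j>i} g^{x_j}).
        num = den = 1
        for j in range(i):
            num = num * gx[j] % p
        for j in range(i + 1, n):
            den = den * gx[j] % p
        Y_i = num * pow(den, -1, p) % p
        # Round 2: voter i broadcasts Y_i^{x_i} * g^{v_i}.
        ballots.append(pow(Y_i, x[i], p) * pow(g, votes[i], p) % p)
    # The exponents x_i*y_i sum to zero, so multiplying all the ballots
    # together leaves g^{sum v_i} -- anyone can do this tally.
    prod = 1
    for b in ballots:
        prod = prod * b % p
    # Recover the count by brute-forcing the small exponent.
    return next(k for k in range(n + 1) if pow(g, k, p) == prod)

print(open_vote_network([1, 0, 1, 1, 0]))  # prints 3
```

No single ballot reveals its vote, because the masking term Y_i^{x_i} looks random to anyone who doesn’t know all the other secrets; yet the masks cancel in the product, so the tally is publicly computable.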
Given the same security properties, it seems unlikely that any decentralized voting scheme can be significantly more efficient than ours. However, in cryptography nothing is ever proved optimal, so we leave this question open.
The second edition of my book “Security Engineering” came out three weeks ago. Wiley have now got round to sending me the final electronic version of the book, plus permission to put half a dozen of the chapters online. They’re now available for download here.
The chapters I’ve put online cover security psychology, banking systems, physical protection, APIs, search, social networking, elections and terrorism. That’s just a sample of how our field has grown outwards in the seven years since the first edition.
Today, the Electoral Commission released their evaluation reports on the May 2007 e-voting and e-counting pilots held in England. Each of the pilot areas has a report from the Electoral Commission, and the e-counting trials are additionally covered by technical reports from Ovum, the Electoral Commission’s consultants. Each of the changes piloted receives its own summary report: electronic counting, electronic voting, advance voting and signing in polling stations. Finally, there is a set of key findings from both the Electoral Commission and Ovum.
Richard Clayton and I acted as election observers for the Bedford e-counting trial, on behalf of the Open Rights Group, and our discussion of the resulting report can be found in an earlier post. I also gave a talk on a few of the key points.
The Commission’s criticism of e-counting and e-voting was scathing; of the latter they said that the “security risk involved was significant and unacceptable.” They recommend against further trials until the problems identified are resolved. Quality assurance and planning were found to be inadequate, predominantly stemming from insufficient timescales. Of the six e-counting trials, three were abandoned and two were delayed, leaving only one that could be classed as a success. Poor transparency and value for money are also cited as problems. More worryingly, the Commission identify a failure to learn from the lessons of previous pilot programmes.
The reports covering the Bedford trials largely match my personal experience of the count, and add some details which were not available to the election observers (in particular, that some of the system shutdowns were to permit re-configuration of the OCR algorithms, and that due to delays at the printing contractor, no testing with actual ballot papers was performed). One difference is that the Ovum report was more generous than the Commission report regarding candidates’ perceptions, saying “Apart from the issue of time, none of the stakeholders questioned the integrity of the system or the results achieved.” This discrepancy could be because the Ovum and Commission representatives left before midnight, when candidates who had lost confidence in the integrity of the results called for a recount.
There is much more detail to the reports than I have been able to summarise here, so if you are interested in electronic elections, I suggest you read them yourselves.
The Open Rights Group has in general welcomed the Electoral Commission’s report, but feels that the inherent problems resulting from the use of computers in elections have not been fully addressed. The report’s findings have also been covered by the media, for example the BBC: “Halt e-voting, says election body” and The Guardian: “Electronic voting not safe, warns election watchdog”.
Many designs for trustworthy electronic elections use cryptography to assure participants that the result is accurate. However, it is a system’s software engineering that ensures a result is declared at all. Both good software engineering and cryptography are thus necessary, but so far cryptography has drawn more attention. In fact, the software engineering aspects could be just as challenging, because election systems have a number of properties which make them almost a pathological case for robust design, implementation, testing and deployment.
Currently deployed systems are lacking in both software robustness and cryptographic assurance, as evidenced by the English electronic election fiasco. In some cases the result was late; in others the electronic count was abandoned due to system failures resulting from poor software engineering. However, even where a result was returned, the black-box nature of auditless electronic elections brought the accuracy of the count into doubt. In the few cases where cryptography was used, it was poorly explained and didn’t help verify the result either.
End-to-end cryptographically assured elections have generated considerable research interest, and the resulting systems, such as Punchscan and Prêt à Voter, allow voters to verify the result while maintaining their privacy (provided they understand the maths, that is; the rest of us will have to trust the cryptographers). These systems will permit an erroneous result to be detected after the election, whether caused by malice or by more mundane software flaws. However, should this occur, or if no result is returned at all, the election may need to fall back on paper backups or even be re-run: a highly disruptive and expensive failure.
Good software engineering is necessary but, in the case of voting systems, may be especially difficult to achieve. In fact, such systems have more in common with the software behind rocket launches than with conventional business productivity software. We should thus expect correspondingly high costs and, despite all this extra effort, the occasional catastrophe. The remainder of this post discusses why I think this is the case, and how manually counted paper ballots circumvent many of these difficulties.