Category Archives: Electronic voting

Three Paper Thursday: Vēnī, Vīdī, Vote-y – Election Security

With the recent quadrennial instantiation of the US presidential election, discussions of election security have predictably resurged across much of the world. Indeed, news cycles in the US, UK, and EU abound with talking points surrounding the security of elections. Against this backdrop, we will use this week’s Three Paper Thursday to examine the technical challenges, solutions, and opportunities in designing secure election systems.

This post will focus on the technical security of election systems. That said, the topic of voter manipulation techniques such as disinformation campaigns, although out of scope here, is also an open area of research.

At first glance, voting may not seem like a challenging problem. If we are to consider a simple majority vote, surely a group of young schoolchildren could reach a consensus in minutes via hand-raising. Striving for more efficient vote tallying, we might instead follow the IETF and reach consensus through humming. As we seek a solution that can scale to large numbers of voters, however, practical limitations force us to select a multi-location, asynchronous process. Whether we choose in-person polling stations or mail-in voting, challenges quickly emerge: how do we know a particular vote was counted, its contents kept secret, and the final tally correct?

National Academies of Sciences, Engineering, and Medicine (U.S.), Ed., Securing the Vote: Protecting American Democracy, The National Academies Press (2018)

The first paper is particularly prominent due to its unified, no-nonsense, and thorough analysis. The report is specific to the United States, but its key themes apply generally. Written in response to accusations of international interference in the 2016 US presidential election, the report sets out 41 recommendations from the National Academies for strengthening the US election system.

These recommendations are extremely straightforward, and as such serve as a reminder that adversaries most often penetrate large systems by targeting the “weakest link.” Among other things, the authors recommend creating standardized ballot data formats, regularly validating voter registration lists, evaluating the accessibility of ballot formats, ensuring access to absentee ballots, conducting appropriate audits, and providing adequate funding for elections.

It’s important to get the basics right. While there are many complex, stimulating proposals that utilize cutting-edge algorithms, cryptography, and distributed systems techniques to strengthen elections, many of these proposals are moot if the basic logistics are mishandled.

Some of these low-tech recommendations are, to the surprise of many passionate technologists, quite common among election security specialists. For example, requiring a paper ballot trail and avoiding internet voting with current technology are also advocated in our next paper.

Matthew Bernhard et al., Public Evidence from Secret Ballots, arXiv:1707.08619 (2017)

Governance aside, the second paper offers a comprehensive survey of the key technical challenges in election security and the common tools used to address them. The paper motivates the difficulty of election systems by observing that all actors involved in an election are mutually distrustful, that meaningful election results require evidence, and that voters require ballot secrecy.

Ballot secrecy is more than a nicety; it is key to a properly functioning election system. Implemented correctly, ballot secrecy prevents voter coercion. If a voter’s ballot is not secret, or indeed if there is any way a voter can prove after the fact that they cast a certain vote, malicious actors may pressure the voter to provide proof that they voted as directed. This can be insidiously difficult to prevent if not considered thoroughly.

Bernhard et al. discuss risk-limiting audits (RLAs) as an efficient yet powerful way to limit uncertainty in election results. By sampling and recounting a subset of votes, RLAs enable the use of statistical methods to increase confidence in a correct ballot count. Employed properly, RLAs can validate election tallies with high probability, with auditing effort roughly inversely proportional to the reported margin. RLAs are now being used in real-world elections, and a number of RLA techniques are in practical use.
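
To make the statistical idea concrete, below is a minimal Python sketch in the spirit of a BRAVO-style ballot-polling audit for a two-candidate contest. It is illustrative only: real audits must handle multiple candidates, invalid ballots, sampling logistics, and escalation rules, and every name in the sketch is ours rather than anything defined in the paper.

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
    """Sketch of a ballot-polling risk-limiting audit (two candidates only).

    ballots: the physical ballots, modelled as 'W' (for the reported winner)
             or 'L' (for the reported loser), which we sample and inspect.
    reported_winner_share: the winner's share in the reported tally (> 0.5).
    risk_limit: e.g. 0.05 for a 5% risk limit.
    """
    assert reported_winner_share > 0.5
    likelihood_ratio = 1.0
    for i in random.sample(range(len(ballots)), len(ballots)):  # random audit order
        if ballots[i] == 'W':            # sampled ballot shows the reported winner
            likelihood_ratio *= reported_winner_share / 0.5
        else:                            # sampled ballot shows the reported loser
            likelihood_ratio *= (1 - reported_winner_share) / 0.5
        if likelihood_ratio >= 1 / risk_limit:
            return True                  # strong evidence the reported outcome is right
    return False                         # never convinced: escalate to a full hand count

# Example: 10,000 ballots with a reported (and genuine) 60/40 split.
ballots = ['W'] * 6000 + ['L'] * 4000
print(ballot_polling_audit(ballots, reported_winner_share=0.6))
```

Note how a wide reported margin pushes the likelihood ratio past the stopping threshold after relatively few ballots, while a narrow margin forces a much larger sample; this is where the inverse relationship between margin and auditing effort comes from.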

Refreshingly, this paper establishes that blockchain-based voting is a bad idea. Blockchains inherently lack a central authority, so enforcing election rules would be a challenge. Furthermore, a computationally powerful adversary could control which votes get counted.

The paper also discusses high-level cryptographic tools that can be useful in elections. This leads us to our third and final paper.

Josh Benaloh, ElectionGuard Specification v0.95, Microsoft GitHub (2020)

Our final paper is slightly different from the others in this series; it’s a snapshot of a formal specification that is actively being developed, largely based on the author’s 1996 Yale doctoral thesis.

The specification describes ElectionGuard, a system being built by Microsoft to enable verifiable election results (disclaimer: the author of this post holds a Microsoft affiliation). It uses a combination of exponential ElGamal additively homomorphic encryption, zero-knowledge proofs, and Shamir’s secret sharing to conduct publicly verifiable, secret-ballot elections.
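
To illustrate the additive homomorphism at the heart of this design, here is a toy Python sketch of exponential ElGamal with deliberately tiny, insecure parameters. It omits the zero-knowledge proofs and the threshold decryption that the specification requires, and nothing below reflects the actual ElectionGuard parameters or API.

```python
import random

p = 1000003                      # a small prime modulus (far too small to be secure)
g = 3                            # a fixed base element

x = random.randrange(2, p - 1)   # secret key (in ElectionGuard, shared among the authorities)
h = pow(g, x, p)                 # public key

def encrypt(vote):
    """Exponential ElGamal: a 0/1 vote is encoded in the exponent of g."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Multiplying ciphertexts component-wise adds the underlying votes."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def decrypt_tally(c, max_tally):
    """Recover g^tally, then find the (small) tally by exhaustive search."""
    shared = pow(c[0], x, p)
    g_tally = c[1] * pow(shared, p - 2, p) % p     # divide out the shared secret
    for t in range(max_tally + 1):
        if pow(g, t, p) == g_tally:
            return t
    raise ValueError("tally out of range")

votes = [1, 0, 1, 1, 0]                            # five secret ballots
total = encrypt(votes[0])
for v in votes[1:]:
    total = add(total, encrypt(v))
assert decrypt_tally(total, len(votes)) == sum(votes)   # the tally is 3
```

Because each vote sits in the exponent, the product of all the ciphertexts encrypts the sum of the votes; that sum is small, so the final discrete logarithm can be recovered by exhaustive search even though individual ballots remain unreadable.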

When a voter casts a ballot, they are given a tracking code which can be used to verify, via cryptographic proofs published with the final tally, that the ballot’s votes were counted. Voters can gain high confidence that their ballot correctly encrypts their intended votes by optionally spoiling any number of ballots; a spoiled ballot is decrypted at the time of voting so the voter can check it, and it is not cast. Ballots are tallied homomorphically, in encrypted form, by the election authorities, and the number of authorities participating in the tally must meet the threshold set for the election, which protects against malicious authorities.
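
The threshold idea can be illustrated with a generic (t, n) Shamir secret-sharing sketch in Python. This is only an analogy: in ElectionGuard the authorities never reconstruct a joint secret key but instead contribute partial decryptions, and all of the names and parameters below are ours.

```python
import random

PRIME = 2**127 - 1   # field for the shares (an arbitrary large prime, chosen for illustration)

def make_shares(secret, threshold, n):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, n=5)
assert reconstruct(shares[:3]) == 123456789    # any three of the five authorities suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer shares than the threshold reveal nothing about the secret, so a small coalition of authorities cannot decrypt anything on its own, while tallying can still proceed if some authorities drop out.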

The specification does not require that the system be used exclusively for internet-based or polling-station-based elections; rather, it is a framework that adopters can use as they see fit. Indeed, one of the draws of ElectionGuard is that it does not mandate a specific UI, ballot marking device, or even API. This flexibility allows election authorities to leverage the system in the manner that best fits their jurisdiction. The open source implementation can be found on GitHub.

There are many pieces of voting software available, but ElectionGuard is the new kid on the block that addresses many of the concerns raised in our earlier papers.

Key Themes

Designing secure election systems is difficult.

Often, election systems fall short on the basics; inaccurate voter registration lists, postage issues, and poorly formatted ballots can disrupt elections as much as some adversaries can. Ensuring that the foundational components of an election are handled well currently involves seemingly mundane but important measures such as paper ballot trails, chains of custody, and voter ID verification.

High-tech election proposals are not new; indeed, key insights into the use of cryptographic techniques in elections were being discussed in the academic literature well over two decades ago. That said, in recent years there has been an apparent increase in investment in implementing cryptographic election systems, and although many problems remain to be solved, the future of this area looks promising.

Our new “Freedom of Speech” policy

Our beloved Vice-Chancellor proposes a “free speech” policy under which all academics must treat other academics with “respect”. This is no doubt meant well, but the drafting is surprisingly vague and authoritarian for a university where the VC, the senior pro-VC, the HR pro-VC and the Registrary are all lawyers. The bottom line is that in future we might face disciplinary charges and even dismissal for mockery of ideas and individuals with which we disagree.

The policy was slipped out in March, when nobody was paying attention. There was a Discussion in June, at which my colleague Arif Ahmad spelled out the problems.

Vigorous debate is intrinsic to academia and it should be civil, but it is unreasonable to expect people to treat all opposing views with respect. Oxford’s policy spells this out. At the Discussion, Arif pointed out that “respect” must be changed to “tolerance” if we are to uphold the liberal culture that we have not just embraced but developed over several centuries.

At its first meeting this term, the University Council considered these arguments but decided to press ahead anyway. We are therefore calling a ballot on three amendments to the policy. If you’re a senior member of the University, we invite you to support them by signing the flysheets. The first amendment changes “respect” to “tolerance”; the second makes it harder to force university societies to disinvite speakers whose remarks may be controversial; and the third restricts the circumstances in which the university itself can ban speakers.

Liberalism is coming under attack from authoritarians of both left and right, yet it is the foundation on which modern academic life is built, and our own university has contributed more than any other to its development over the past 811 years. If academics can face discipline for using tactics such as scorn, ridicule and irony to criticise folly, how does that sit with having such alumni as John Maynard Keynes and Charles Darwin, not to mention Bertrand Russell, Douglas Adams and Salman Rushdie?

Happy Birthday FIPR!

On May 29th there will be a lively debate in Cambridge between people from NGOs and GCHQ, academia and DeepMind, the press and the Cabinet Office. Should governments be able to break the encryption on our phones? Are we entitled to any privacy for our health and social care records? And what can be done about fake news? If the Internet’s going to be censored, who do we trust to do it?

The occasion is the 20th birthday of the Foundation for Information Policy Research, which was launched on May 29th 1998 to campaign against what became the Regulation of Investigatory Powers Act. Tony Blair wanted to be able to treat all URLs as traffic data and collect everyone’s browsing history without a warrant; we fought back, and our “big browser” amendment defined traffic data to be only that part of the URL needed to identify the server. That set the boundary. Since then, FIPR has engaged in research and lobbying on export control, censorship, health privacy, electronic voting and much else.
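
To see where that boundary falls in practice, here is a small illustrative sketch (the URL is invented): the server-identifying prefix of a URL says only which site you visited, while the remainder reveals what you were reading.

```python
from urllib.parse import urlparse

url = "https://example.com/some/sensitive/page?q=term"   # a made-up example
parts = urlparse(url)

traffic_data = f"{parts.scheme}://{parts.netloc}/"        # identifies only the server
content = parts.path + (f"?{parts.query}" if parts.query else "")  # the rest of the URL

print(traffic_data)   # https://example.com/ -- roughly the boundary the amendment drew
print(content)        # /some/sensitive/page?q=term -- this part reveals what you read
```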

After twenty years it’s time to take stock. It’s remarkable how little the debate has shifted despite everything moving online. The police and spooks still claim they need to break encryption but still can’t support that with real evidence. Health administrators still want to sell our medical records to drug companies without our consent. Governments still can’t get it together to police cybercrime, but want to censor the Internet for all sorts of other reasons. Laws around what can be said or sold online – around copyright, pornography and even election campaign funding – are still tussle spaces, only now the big beasts are Google and Facebook rather than the copyright lobby.

A historical perspective might perhaps be of some value in guiding future debates on policy. If you’d like to join in the discussion, book your free ticket here.

Security Protocols 2016

I’m at the 24th security protocols workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel Museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas, for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi, and I have had to liveblog from the pub, where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.

Can we crowdsource trust?

Your browser contains a few hundred root certificates. Many of them were put there by governments; two (Verisign and Comodo) are there because so many merchants trust them that they’ve become ‘too big to fail’. This is a bit like the way people buy the platform with the most software – a pattern of behaviour that let IBM and then Microsoft dominate our industry in turn. But this is not how trust should work; it leads to many failures, some of them invisible.

What’s missing is a mechanism where trust derives from users, rather than from vendors, merchants or states. After all, the power of a religion stems from the people who believe in it, not from the government. Entities with godlike powers that are foisted on us by others and can work silently against us are not gods, but demons. What can we do to exorcise them?

“Do You Believe in Tinker Bell? The Social Externalities of Trust” explores how we can crowdsource trust. Tor bridges help censorship victims access the Internet freely, and there are not enough of them. We want to motivate lots of people to provide them, and the best providers are simply those who help the most victims. So trust should flow from the support of the users, and it should be hard for powerful third parties to pervert. Perhaps a useful mascot is Tinker Bell, the fairy in Peter Pan, whose power waxes and wanes with the number of children who believe in her.

Security Protocols 2015

I’m at the 23rd Security Protocols Workshop, whose theme this year is information security in fiction and in fact. Engineering is often inspired by fiction, and vice versa; what might we learn from this?

I will try to liveblog the talks in followups to this post.

How to vote anonymously under ubiquitous surveillance

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How can an election be arranged so that voters’ privacy is best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. With cryptography, anything is possible. In a forthcoming paper in IET Information Security, written jointly with Peter Ryan and Piotr Zielinski, we describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of voters’ privacy: only a full collusion of the other voters can break it. In addition, the protocol is exceptionally efficient. It compares favorably to past solutions in terms of round efficiency, computation load and bandwidth usage, and is close to the best possible in each of these aspects.
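
For intuition, here is a minimal Python sketch of the protocol’s two-round structure. It uses toy, insecure parameters and omits the zero-knowledge proofs that accompany every message in the real protocol, so it is only a sketch of the core idea, not the protocol as specified in the paper.

```python
import random

p = 1000003                              # toy prime modulus (insecure, for illustration)
g = 3
n = 5                                    # number of voters
votes = [1, 0, 1, 1, 0]                  # each voter's secret 0/1 vote

# Round 1: each voter i broadcasts g^x_i (in the real protocol, together with a
# zero-knowledge proof of knowledge of x_i).
x = [random.randrange(2, p - 1) for _ in range(n)]
gx = [pow(g, xi, p) for xi in x]

def reconstructed_key(i):
    """Y_i = (prod_{j<i} g^x_j) / (prod_{j>i} g^x_j); nobody knows its discrete log."""
    num, den = 1, 1
    for j in range(i):
        num = num * gx[j] % p
    for j in range(i + 1, n):
        den = den * gx[j] % p
    return num * pow(den, p - 2, p) % p

# Round 2: each voter broadcasts Y_i^x_i * g^v_i (in the real protocol, together
# with a proof that v_i is 0 or 1).
ballots = [pow(reconstructed_key(i), x[i], p) * pow(g, votes[i], p) % p
           for i in range(n)]

# Tally: the Y_i^x_i blinding factors cancel when all ballots are multiplied,
# leaving g^(sum of votes); the sum is small, so anyone can recover it.
product = 1
for b in ballots:
    product = product * b % p
tally = next(t for t in range(n + 1) if pow(g, t, p) == product)
assert tally == sum(votes)               # the tally is 3, yet no single ballot is readable
```

The self-cancelling structure of the Y_i values is what removes the need for any tallying authority: once every voter has posted both rounds, anyone can compute the result from the public transcript alone.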

It seems unlikely that a decentralized voting scheme with the same security properties could be significantly more efficient than ours. However, in cryptography nothing is ever certain to be optimal, so we leave the question open.

A preprint of the paper is available here, and the slides here.

Second edition

The second edition of my book “Security Engineering” came out three weeks ago. Wiley have now got round to sending me the final electronic version of the book, plus permission to put half a dozen of the chapters online. They’re now available for download here.

The chapters I’ve put online cover security psychology, banking systems, physical protection, APIs, search, social networking, elections and terrorism. That’s just a sample of how our field has grown outwards in the seven years since the first edition.

Enjoy!