The role of software engineering in electronic elections

Many designs for trustworthy electronic elections use cryptography to assure participants that the result is accurate. However, it is a system’s software engineering that ensures a result is declared at all. Both good software engineering and cryptography are thus necessary, but so far cryptography has drawn more attention. In fact, the software engineering aspects could be just as challenging, because election systems have a number of properties which make them almost a pathological case for robust design, implementation, testing and deployment.

Currently deployed systems are lacking in both software robustness and cryptographic assurance, as evidenced by the English electronic election fiasco. In some cases the result was late; in others the electronic count was abandoned altogether, due to system failures resulting from poor software engineering. However, even where a result was returned, the black-box nature of auditless electronic elections brought the accuracy of the count into doubt. In the few cases where cryptography was used, it was poorly explained and didn’t help verify the result either.

End-to-end cryptographically assured elections have generated considerable research interest, and the resulting systems, such as Punchscan and Prêt à Voter, allow voters to verify the result while maintaining their privacy (provided they understand the maths, that is; the rest of us will have to trust the cryptographers). These systems will permit an erroneous result to be detected after the election, whether caused by maliciousness or more mundane software flaws. However, should this occur, or if no result is returned at all, the election may need to fall back on paper backups or even be re-run: a highly disruptive and expensive failure.

Good software engineering is necessary but, in the case of voting systems, may be especially difficult to achieve. In fact, such systems have more in common with the software behind rocket launches than with conventional business productivity software. We should thus expect the consequent high costs and accept that, despite all this extra effort, the occasional catastrophe will be inevitable. The remainder of this post will discuss why I think this is the case, and how manually counted paper ballots circumvent many of these difficulties.

Digital signatures hit the road

For about thirty years now, security researchers have been talking about using digital signatures in court. Thousands of academic papers have had punchlines like “the judge then raises X to the power Y, finds it’s equal to Z, and sends Bob to jail”. So far, this has been pleasant speculation.
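The “raise X to the power Y and check it equals Z” punchline is, in essence, textbook RSA signature verification. A toy sketch, using deliberately tiny and insecure numbers purely for illustration (real systems use large keys, hashing and padding):

```python
# Toy RSA key: p = 61, q = 53, so n = 3233 and phi(n) = 3120.
# Public exponent e = 17; private exponent d = 2753 (17 * 2753 mod 3120 == 1).
n, e, d = 3233, 17, 2753

def sign(message: int, d: int, n: int) -> int:
    """The signer computes the signature s = m^d mod n."""
    return pow(message, d, n)

def verify(message: int, signature: int, e: int, n: int) -> bool:
    """The 'judge' raises the signature X to the power Y (= e) mod n
    and checks that the result equals the claimed message Z."""
    return pow(signature, e, n) == message

m = 65                # the record being signed (must be < n in this toy scheme)
s = sign(m, d, n)
print(verify(m, s, e, n))       # True: a genuine signature verifies
print(verify(m + 1, s, e, n))   # False: a tampered record does not
```

The point of the anecdote below is precisely that no court ever performs this arithmetic directly.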

Now the rubber starts to hit the road. Since 2006, trucks in Europe have been using digital tachographs. Tachographs record a vehicle’s speed history and help enforce restrictions on drivers’ working hours. For many years they used circular waxed paper charts, which have been accepted in court as evidence just like any other paper record. However, paper charts are now being replaced with smartcards. Each driver has a card that records 28 days of infringement history, protected by digital signatures. So we’ve now got the first widely deployed system in which digital signatures are routinely adduced in evidence. The signed records are being produced to support prosecutions for working excessive hours, for speeding, for tachograph tampering, and sundry other villainy.

So do magistrates really raise X to the power Y, find it’s equal to Z, and send Eddie off to jail? Not according to enforcement folks I’ve spoken to. Apparently judges find digital signatures too “difficult” as they’re all in hex. The police, always eager to please, have resolved the problem by applying standard procedures for “securing” digital evidence. When they raid a dodgy trucking company, they image the PC’s disk drive and take copies on DVDs that are sealed in evidence bags. One gets given to the defence and one kept for appeal. The paper logs documenting the procedure are available for Their Worships to inspect. Everyone’s happy, and truckers duly get fined.

In fact the trucking companies are very happy. I understand that 20% of British trucks now use digital tachographs, well ahead of expectations. Perhaps this is not uncorrelated with the fact that digital tachographs keep much less detailed data than could be coaxed out of the old paper charts. Just remember, you read it here first.

Recent talks: Chip & PIN, traffic analysis, and voting

In the past couple of months, I’ve presented quite a few talks, and in the course of doing so, travelled a lot too (Belgium and Canada last month; America and Denmark still to come). I’ve now published my slides from these talks, which might also be of interest to Light Blue Touchpaper readers, so I’ll summarize the contents here.

Two of the talks were on Chip & PIN, the UK deployment of EMV. The first presentation — “Chip and Spin” — was for the Girton village Neighbourhood Watch meeting. Girton was hit by a spate of card-cloning, eventually traced back to a local garage, so they invited me to give a fairly non-technical overview of the problem. The slides served mainly as an introduction to a few video clips I showed, taken from TV programmes in which I participated. [slides (PDF 1.1M)]

The second Chip & PIN talk was to the COSIC research group at K.U. Leuven. Due to the different audience, this presentation — “EMV flaws and fixes: vulnerabilities in smart card payment systems” — was much more technical. I summarized the EMV protocol, described a number of weaknesses which leave EMV open to attack, along with corresponding defences. Finally, I discussed the more general problem with EMV — that customers are in a poor position to contest fraudulent transactions — and how this situation can be mitigated. [slides (PDF 1.4M)]

If you are interested in further details, much of the material from both of my Chip & PIN talks is discussed in papers from our group, such as “Chip and SPIN”, “The Man-in-the-Middle Defence” and “Keep Your Enemies Close: Distance bounding against smartcard relay attacks”.

Next I went to Ottawa for the PET Workshop (now renamed the PET Symposium). Here, I gave three talks. The first was for a panel session — “Ethics in Privacy Research”. Since this was a discussion, the slides aren’t particularly interesting but it will hopefully be the subject of an upcoming paper.

Then I gave a short talk at WOTE, on my experiences as an election observer. I summarized the conclusions of the Open Rights Group report (released the day before my talk) and added a few personal observations. Richard Clayton discussed the report in the previous post. [slides (PDF 195K)]

Finally, I presented the paper written by Piotr Zieliński and me — “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, which I previously mentioned in a recent post. In the talk I gave a graphical summary of the paper’s key points, which I hope will aid in understanding the motivation of the paper and the traffic analysis method we developed. [slides (PDF 2.9M)]

"No confidence" in eVoting pilots

Back on May 3rd, Steven Murdoch, Chris Wilson and I acted as election observers for the Open Rights Group (ORG) and looked at the conduct of the parish, council and mayoral elections in Bedford. Steven and I went back again on the 4th to observe their “eCounting” of the votes. In fact, we were still there on the 5th, at half past one in the morning, when the final result was declared after over fifteen hours.

Far from producing faster, more accurate, results, the eCounting was slower and left everyone concerned with serious misgivings — and no confidence whatsoever that the results were correct.

Today ORG launches its collated report into all of the various eVoting and eCounting experiments that took place in May — documenting the fiascos that occurred not only in Bedford but also in every other place that ORG observed. Their headline conclusion is “The Open Rights Group cannot express confidence in the results for areas observed” — which is pretty damning.

In Bedford, we noted that prior to the shambles on the 4th of May the politicians and voters we talked to were fairly positive about “e” elections — seeing it as inevitable progress. When things started to go wrong they then changed their minds…

However, there isn’t any “progress” here, and almost everyone technical who has looked at voting systems is concerned about them. The systems don’t work very well, they are inflexible, they are poorly tested and they are badly designed — and then when legitimate doubts are raised as to their integrity there is no way to examine the systems to determine that they’re working as one would hope.

We rather suspect that people are scared of being seen as Luddites if they don’t embrace “new technology” — whereas more technical people, who are more confident of their knowledge, are prepared to assess these systems on their merits, find them sadly lacking, and then speak up without being scared that they’ll be seen as ignorant.

The ORG report should go some way to helping everyone understand a little more about the current, lamentable, state of the art — and, if only just a little common sense is brought to bear, should help kill off e-Elections in the UK for a generation.

Here’s hoping!

Hacking tools are legal for a little longer

It’s well over a year since the Government first brought forward their proposals to make security research illegal, er, crack down on hacking tools.

They revised their proposals a bit, in the face of considerable lobbying about so-called “dual-use” tools. These are programs that might be used by security professionals to check whether machines are secure, and by criminals to look for insecure ones to break into. In fact, most of the tools on a professional’s laptop, from nmap through wireshark to perl, could be used for both good and bad purposes.

The final wording means that to successfully prosecute the author of a tool you must show that they intended it to be used to commit computer crime; and intent would also have to be proved for obtaining, adapting, supplying or offering to supply … So most security professionals have nothing to worry about, in theory. In practice, of course, being accused of wickedness and having to convince a jury that there was no intent would be pretty traumatic!

The most important issue that the Home Office refused to concede was the distribution offence. The offence is to “supply or offer to supply, believing that it is likely to be used to commit, or to assist in the commission of [a Computer Misuse Act s1/s3 offence]”. The Home Office claim that “likely” means “more than a 50% chance” (apparently there’s caselaw on what “likely” means in a statute).

This is of course entirely unsatisfactory: you can run a website for people to download nmap for years without problems, then one day you look at your weblogs and find that everyone in Ruritania (a well-known Eastern European criminal paradise) is downloading from you, and suddenly you’re committing an offence. Of course, if you didn’t look at your logs then you would not know — and maybe the lack of mens rea will get you off? (IANAL! So take advice before trying this at home!)

The hacking tools offences were added to the Computer Misuse Act 1990 (CMA), along with other changes to make it clear that DDoS is illegal, and along with changes to the tariffs on other offences to make them much more serious — and extraditable.

The additions are in the form of amendments that are incorporated in the Police and Justice Act 2006 which received its Royal Assent on the 8th November 2006.

However, the relevant sections, s35–38, are not yet in force! Viz: hacking tools are still not illegal, and will not be illegal until, probably, April 2008.

Phishing, students, and cheating at the lottery

Every so often I set an exam question to which I actually want to know the answer. A few years back, when the National Lottery franchise was up for tender, I asked students how to cheat at the lottery; the answers were both entertaining and instructive. Having a lot of bright youngsters think about a problem under stress for half an hour gives you rapid, massively-parallel requirements engineering.

This year I asked about phishing: here’s the question. When I set it in February, an important question for the banks was whether to combat phishing with two-factor authentication (give customers a handheld password calculator, as Coutts does) or two-channel authentication (send them an SMS when they make a sensitive transaction, saying for example “if you really meant to send $4000 to Latvia, please enter the code 4715 in your browser now”).
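The two-channel mechanism described above can be sketched in a few lines. This is a minimal illustration, not any bank’s real system: the class and method names (`TwoChannelAuth`, `start_transaction`, `confirm`) are invented, and real deployments must also handle code expiry, retry limits and SMS delivery failures.

```python
import secrets

class TwoChannelAuth:
    """Sketch of two-channel transaction authentication: the bank sends a
    short random code over a second channel (SMS) bound to the transaction
    details, and the customer must echo it back in the browser."""

    def __init__(self):
        self.pending = {}  # transaction id -> expected one-time code

    def start_transaction(self, txn_id: str, amount: int, dest: str) -> str:
        """Generate a one-time code and compose the SMS. Binding the code to
        the amount and destination lets the customer spot a tampered payee."""
        code = f"{secrets.randbelow(10_000):04d}"
        self.pending[txn_id] = code
        return (f"If you really meant to send ${amount} to {dest}, "
                f"please enter the code {code} in your browser now")

    def confirm(self, txn_id: str, entered_code: str) -> bool:
        """Approve only if the code from the second channel matches; the
        one-shot pop() stops an attacker retrying guesses on the same code."""
        expected = self.pending.pop(txn_id, None)
        return expected is not None and secrets.compare_digest(expected, entered_code)

bank = TwoChannelAuth()
sms = bank.start_transaction("txn-1", 4000, "Latvia")
# the customer reads the code out of the SMS and types it into the browser
```

The security argument is that a real-time man-in-the-middle who alters the transaction in the browser cannot alter the SMS, so the customer sees the true amount and destination before approving.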

At least two large UK banks are planning to go two-factor – despite eight-figure costs, the ease of real-time man-in-the-middle attacks, and other problems described here and here. Some banks have thought of two-channel but took fright at the prospect that customers might find it hard to use and deluge their call centres. So I set phishing as an exam question, inviting candidates to select two protection mechanisms from a list of four.

The overwhelming majority of the 34 students who answered the question chose two-channel as one of their mechanisms. I’ve recently become convinced this is the right answer, because of feedback from early adopter banks overseas who have experienced no significant usability problems. It was interesting to have this insight confirmed by the “wisdom of crowds”; I’d only got the feedback in the last month or so, and had not told the students.

Ross

PS: there’s always some obiter dictum that gives an insight into youth psychology. Here it was the candidate who said the bank should use SSL client certificates plus SMS notification, as that gives you three-factor authentication: something you know (your password), something you have (your SSL cert) and something you are (your phone). So now we know 🙂

Should there be a Best Practice for censorship?

A couple of weeks ago, right at the end of the Oxford Internet Institute conference on The Future of Free Expression on the Internet, the question was raised from the platform as to whether it might be possible to construct a Best Current Practice (BCP) framework for censorship.

If — the argument ran — IF countries were transparent about what they censored, IF there was no overblocking (the literature’s jargon for collateral damage), IF it was done under a formal (local) legal framework, IF there was the right of appeal to correct inadvertent errors, IF … and doubtless a whole raft more of “IFs” that a proper effort to develop a BCP would establish. IF… then perhaps censorship would be OK.

I spoke against the notion of a BCP from the audience at the time, and after some reflection I see no reason to change my mind.

There will be many more subtle arguments, much as there will be many more IFs to consider, but I can immediately see two insurmountable objections.

The first is that a BCP will inevitably lead to far more censorship, but now with the apparent endorsement of a prestigious organisation: “The OpenNet Initiative says that blocking the political opposition’s websites is just fine!” Doubtless some of the IFs in the BCP will address open political processes, and universal human rights … but it will surely come down to quibbling about language: terrorist/freedom-fighter; assassination/murder; dissent/rebellion; opposition/traitor.

The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible. All of the schemes for blocking content can be evaded by those with technical knowledge (or with access to the tools written by others who have that knowledge). Proxies, VPNs, Tor, fragments, ignoring resets… the list of evasion technologies is endless.

One of the best ways of spreading data to multiple sites is to attempt to remove it, and every few years some organisation demonstrates this again. Although ad hoc replication doesn’t necessarily scale, there are plenty of schemes in the literature for doing it on an industrial scale.

It’s clichéd to trot out John Gilmore’s observation that “the Internet treats censorship as a defect and routes around it”, but over-familiarity with the phrase should not hide its underlying truth.

So, in my view, a BCP will merely be used by the wicked as a fig-leaf for their activity, and by the ignorant to prop up their belief that it’s actually possible to block the content they don’t believe should be visible. A BCP is a thoroughly bad idea, and should not be further considered.

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor traffic both entering and leaving the network. This weakness is well known to the designers, and currently there is no known practical way to resist such attacks while maintaining the low latency demanded by applications such as web browsing. For this reason, it seems intuitively clear that when selecting a path through the Tor network, it would be beneficial to select nodes in different countries. Hopefully, government-level adversaries will find it problematic to track cross-border connections, as mutual legal assistance is slow, if it works at all. Non-government adversaries might find that their influence drops off at national boundaries too.
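The country-diversity intuition could be sketched as follows. Everything here is invented for illustration: the relay list, the country codes, and the function name; real Tor clients build paths from directory consensus data under quite different constraints, and the rest of this post explains why country diversity alone can be misleading.

```python
import itertools
import random

# Hypothetical relay -> country-code mapping (illustration only).
relays = {
    "relay-a": "DE", "relay-b": "DE", "relay-c": "US",
    "relay-d": "FR", "relay-e": "US", "relay-f": "NL",
}

def pick_diverse_path(relays: dict, hops: int = 3, rng=random):
    """Return a path of `hops` relays whose countries are pairwise
    distinct, or None if the relay set cannot provide one."""
    candidates = list(relays)
    rng.shuffle(candidates)  # avoid always picking the same relays
    for path in itertools.permutations(candidates, hops):
        if len({relays[r] for r in path}) == hops:
            return list(path)
    return None

path = pick_diverse_path(relays)
# e.g. a path touching three different countries, such as DE -> US -> FR
```

The finding discussed below is that such a path can still be observed at a single point, because "different countries" does not imply "different networks".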

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help and could perhaps even harm security. The PET Award-nominated paper “Location Diversity in Anonymity Networks”, by Nick Feamster and Roger Dingledine, showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, a large proportion of connections through an anonymity network could be traced.

The results of Feamster and Dingledine’s paper suggest that it may be better to bounce anonymity traffic around within a country, because it is less likely that there will be a single ISP monitoring incoming and outgoing traffic to several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. Actually, inter-ISP traffic (especially in Europe) might travel through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.

Distance bounding against smartcard relay attacks

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. Basically, the tamper resistance protects the banks but not the customers, who are left to trust any of the devices they provide their card and PIN to (the hundreds of different types of terminal do not help here). The problem some customers face is that when fraud happens, they are the ones blamed for negligence, instead of the banks owning up to a faulty system. Exacerbating the problem, customers cannot prove they have not been negligent with their secrets without the proper data, which the banks have but refuse to hand out.
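The core idea of distance bounding can be reduced to a timing check: the terminal times a rapid challenge-response round trip with the card and rejects it if the delay is too long to be consistent with the card being physically present. The sketch below is illustrative only; the speed-of-light bound is real physics, but the processing allowance and distances are assumptions, not the parameters of the actual protocol in the paper.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_bound_check(rtt_s: float, distance_bound_m: float,
                         processing_allowance_s: float = 1e-6) -> bool:
    """Accept only if the measured round-trip time rtt_s is consistent
    with the responder being within distance_bound_m of the verifier.
    Signals cannot travel faster than light, so a distant relayed card
    cannot fake a short round trip."""
    max_rtt_s = (2 * distance_bound_m / SPEED_OF_LIGHT_M_PER_S
                 + processing_allowance_s)
    return rtt_s <= max_rtt_s

# A card genuinely in the terminal (~0.1 m away) passes the check...
rtt_local = 2 * 0.1 / SPEED_OF_LIGHT_M_PER_S + 5e-7
print(distance_bound_check(rtt_local, 0.5))    # True

# ...while relaying to a card 10 km away adds delay the bound rejects.
rtt_relayed = 2 * 10_000 / SPEED_OF_LIGHT_M_PER_S + 5e-7
print(distance_bound_check(rtt_relayed, 0.5))  # False
```

The engineering challenge, of course, is measuring round trips precisely enough that the bound is tight: at these timescales even a few microseconds of slack admits a relay over a long distance.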

Results of global Internet filtering survey

At their conference in Oxford, the OpenNet Initiative have released the results from their first global Internet filtering survey. This announcement has been widely covered in the media.

Out of the 41 countries surveyed, 25 were found to impose filtering, though the topics blocked and extent of blocking varies dramatically.

Results can be seen on the filtering map and a URL checker. The full report, including detailed country and region summaries, will be published in the book “Access Denied: The Practice and Policy of Global Internet Filtering”.