USENIX WOOT07, Exploiting Concurrency Vulnerabilities in System Call Wrappers, and the Evil Genius

I’ve spent the day at the First USENIX Workshop on Offensive Technologies (WOOT07) — an interesting new workshop on attack strategies and technologies. The workshop highlights the tension between the “white” and “black” hats in security research — you can’t design systems to avoid security problems if you don’t understand what they are. USENIX’s take on such a forum sits less far down the ethically questionable spectrum than some other venues, but the workshop certainly presented and discussed, in concrete detail, both new exploits for new vulnerabilities and techniques for evading current protections.

I presented “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on the topic of compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessor systems it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. I demonstrated sample exploit code against the Sysjail policy on Systrace, and IDwrappers on GSWTK, but the paper includes a more extensive discussion, including vulnerabilities in sudo’s Systrace monitor mode. You can read the paper and see the presentation slides here. All affected vendors received at least six months’, and in some cases many years’, advance notice regarding these vulnerabilities.
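To make the race concrete, here is a minimal user-space sketch of the time-of-check-to-time-of-use idea (my own illustration, not the exploit code from the paper; the file names and the policy being evaded are hypothetical). One thread repeatedly issues open() on a pathname held in writable memory, while a second thread flips that pathname between an innocuous file and a forbidden one. If a wrapper validates the path and the kernel then fetches it again from user memory, the flip can land between the two fetches:

```c
/* Deliberately racy sketch of a wrapper TOCTTOU attack (illustration only).
 * Build with: cc -pthread race.c */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static char path[64] = "/tmp/allowed";        /* argument a wrapper would inspect */

static void *flipper(void *arg)
{
    (void)arg;
    for (;;) {
        strcpy(path, "/etc/forbidden");       /* swap in the real target... */
        strcpy(path, "/tmp/allowed");         /* ...then back before the next check */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, flipper, NULL);

    for (;;) {
        /* A wrapper checks "path" here; the kernel re-reads it a moment later. */
        int fd = open(path, O_RDONLY);
        if (fd >= 0)
            close(fd);                        /* a real attack would fstat() to see
                                                 which file was actually opened */
    }
}
```

If the timing works out, the wrapper approves “/tmp/allowed” but the kernel opens “/etc/forbidden”; the paper discusses how an attacker can widen this window so that the race is won reliably, even on a uniprocessor.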

The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea, unless of course, you’re willing to rewrite the OS to be message-passing. Systems like the TrustedBSD MAC Framework on FreeBSD and Mac OS X Leopard, Linux Security Modules (LSM), Apple’s (and now also NetBSD’s) kauth(9), and other tightly integrated kernel security frameworks offer specific solutions to these concurrency problems. There’s plenty more to be done in that area.

Concurrency issues have been discussed before in computer security, especially relating to races between applications accessing /tmp, unexpected signal interruption of socket operations, and races in distributed systems, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, correct synchronization and reasoning about highly concurrent systems are critical to thinking about security correctly. To someone with strong interests in both OS parallelism and security, the parallels (no pun intended) seem obvious: in both cases the details really matter, and both require thinking about a proverbial Cartesian Evil Genius. Anyone who has done serious work with concurrent systems knows that they behave as though actively malicious, which makes them a good stand-in for the infamous malicious attacker of security research!

Other presentations included talks on Flayer, Google’s Valgrind-based software fuzzing tool, attacks on deployed SIP systems including AT&T’s product, Bluetooth sniffing with BlueSniff, and quantitative analyses of OS fingerprinting techniques. USENIX members will presumably be able to read the full set of papers online immediately; for others, check back in a year or visit the personal web sites of the speakers after you look at the WOOT07 Programme.

Electoral Commission releases e-voting and e-counting reports

Today, the Electoral Commission released their evaluation reports on the May 2007 e-voting and e-counting pilots held in England. Each of the pilot areas has a report from the Electoral Commission and the e-counting trials are additionally covered by technical reports from Ovum, the Electoral Commission’s consultants. Each of the changes piloted receives its own summary report: electronic counting, electronic voting, advanced voting and signing in polling stations. Finally, there are a set of key findings, both from the Electoral Commission and from Ovum.

Richard Clayton and I acted as election observers for the Bedford e-counting trial, on behalf of the Open Rights Group, and our discussion of the resulting report can be found in an earlier post. I also gave a talk on a few of the key points.

The Commission’s criticism of e-counting and e-voting was scathing; concerning the latter, they said that the “security risk involved was significant and unacceptable.” They recommend against further trials until the problems identified are resolved. Quality assurance and planning were found to be inadequate, predominantly stemming from insufficient timescales. Of the six e-counting trials, three were abandoned and two were delayed, leaving only one that could be classed as a success. Poor transparency and poor value for money are also cited as problems. More worryingly, the Commission identify a failure to learn from the lessons of previous pilot programmes.

The reports covering the Bedford trials largely match my personal experience of the count and add some details which were not available to the election observers (in particular, explaining that the reason for some of the system shutdowns was to permit re-configuration of the OCR algorithms, and that due to delays at the printing contractor, no testing with actual ballot papers was performed). One difference is that the Ovum report was more generous than the Commission report regarding the candidate perceptions, saying “Apart from the issue of time, none of the stakeholders questioned the integrity of the system or the results achieved.” This discrepancy could be because the Ovum and Commission representatives left before the midnight call for a recount, by candidates who had lost confidence in the integrity of the results.

There is much more detail to the reports than I have been able to summarise here, so if you are interested in electronic elections, I suggest you read them yourselves.

The Open Rights Group has in general welcomed the Electoral Commission’s report, but feels that the inherent problems resulting from the use of computers in elections have not been fully addressed. The results of the report have also been covered by the media, such as the BBC: “Halt e-voting, says election body” and The Guardian: “Electronic voting not safe, warns election watchdog”.

Economics of Tor performance

Currently the performance of the Tor anonymity network is quite poor. This problem is frequently stated as a reason for people not using anonymizing proxies, so improving performance is a high priority for its developers. There are only about 1,000 Tor nodes, and many are on slow Internet connections, so in aggregate there is about 1 Gbit/s shared between 100,000 or so users. One way to improve the experience of Tor users is to increase the number of Tor nodes (especially high-bandwidth ones). Some means to achieve this goal are discussed in Challenges in Deploying Low-Latency Anonymity, but here I want to explore what will happen when Tor’s total bandwidth increases.

If Tor’s bandwidth doubled tomorrow, the naïve hypothesis is that users would experience twice the throughput. Unfortunately this is not true, because it assumes that the number of users does not vary with the bandwidth available. In fact, as the supply of the Tor network’s bandwidth increases, there will be a corresponding increase in the demand for bandwidth from Tor users. This applies just as well to other networks, but for the purposes of this post I’ll use Tor as an example. Simple economics shows that the performance of Tor is controlled by how the number of users scales with available bandwidth, which can be represented by a demand curve.
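As a toy illustration of this argument (entirely my own, with a made-up demand curve), suppose users(t) is the number of people willing to use Tor when each gets throughput t. The network settles at the point where total bandwidth divided by the number of users equals the throughput those users will tolerate, so extra bandwidth is largely absorbed by extra users rather than delivered as extra speed:

```c
/* Toy equilibrium model: not Tor measurements, just an illustration of how a
 * demand curve soaks up extra bandwidth.  The curve below is invented.
 * Build with: cc tor_econ.c -lm */
#include <math.h>
#include <stdio.h>

/* Hypothetical demand curve: number of users willing to use Tor when
 * per-user throughput is t kbit/s. */
static double users(double t_kbit)
{
    return 100000.0 * sqrt(t_kbit / 10.0);
}

/* Equilibrium per-user throughput t, where bandwidth / users(t) == t,
 * found by simple fixed-point iteration. */
static double equilibrium(double bandwidth_kbit)
{
    double t = 1.0;
    for (int i = 0; i < 1000; i++)
        t = bandwidth_kbit / users(t);
    return t;
}

int main(void)
{
    const double b = 1e6;                     /* ~1 Gbit/s, expressed in kbit/s */
    for (int k = 1; k <= 2; k++) {
        double t = equilibrium(k * b);
        printf("%.0f kbit/s total -> %.1f kbit/s per user, %.0f users\n",
               k * b, t, users(t));
    }
    return 0;
}
```

With this particular curve, 1 Gbit/s works out at roughly 10 kbit/s each for about 100,000 users, while doubling the bandwidth raises per-user throughput only to about 16 kbit/s, because around 26,000 extra users join. The specific numbers mean nothing; the point is that the per-user benefit is set by the shape of the demand curve, not by bandwidth alone.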

I don’t claim this is a new insight; in fact, between my starting this draft and now, Andreas Pfitzmann made a very similar observation while answering a question following the presentation of Performance Comparison of Low-Latency Anonymisation Services from a User Perspective at the PET Symposium. He said, as I recall, that the performance of an anonymity network is the slowest speed tolerable to the people who care about their privacy. Despite this, I couldn’t find anyone who had written a succinct description anywhere, perhaps because it is too obvious. Equally, I have heard the naïve version stated occasionally, so I think it’s helpful to publish something people can point at. The rest of this post will discuss the consequences of modelling Tor user behaviour in this way, and the limitations of the technique.

Continue reading Economics of Tor performance

The role of software engineering in electronic elections

Many designs for trustworthy electronic elections use cryptography to assure participants that the result is accurate. However, it is a system’s software engineering that ensures a result is declared at all. Both good software engineering and cryptography are thus necessary, but so far cryptography has drawn more attention. In fact, the software engineering aspects could be just as challenging, because election systems have a number of properties which make them almost a pathological case for robust design, implementation, testing and deployment.

Currently deployed systems are lacking in both software robustness and cryptographic assurance — as evidenced by the English electronic election fiasco. Here, in some cases the result was late and in others the electronic count was abandoned due to system failures resulting from poor software engineering. However, even where a result was returned, the black-box nature of auditless electronic elections brought the accuracy of the count into doubt. In the few cases where cryptography was used it was poorly explained and didn’t help verify the result either.

End-to-end cryptographically assured elections have generated considerable research interest, and the resulting systems, such as Punchscan and Prêt à Voter, allow voters to verify the result while maintaining their privacy (provided they understand the maths, that is — the rest of us will have to trust the cryptographers). These systems will permit an erroneous result to be detected after the election, whether caused by maliciousness or by more mundane software flaws. However, should this occur, or should no result be returned at all, the election may need to fall back on paper backups or even be re-run — a highly disruptive and expensive failure.

Good software engineering is necessary but, in the case of voting systems, may be especially difficult to achieve. In fact, such systems have more in common with the software behind rocket launches than with more conventional business productivity software. We should thus expect correspondingly high costs, and accept that, despite all this extra effort, the occasional catastrophe will be inevitable. The remainder of this post will discuss why I think this is the case, and how manually counted paper ballots circumvent many of these difficulties.

Continue reading The role of software engineering in electronic elections

Digital signatures hit the road

For about thirty years now, security researchers have been talking about using digital signatures in court. Thousands of academic papers have had punchlines like “the judge then raises X to the power Y, finds it’s equal to Z, and sends Bob to jail”. So far, this has been pleasant speculation.
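For readers who have never seen that punchline spelled out, here is a toy worked example (textbook RSA with deliberately tiny, insecure numbers; it has nothing to do with the actual key sizes or formats used in tachographs): the verifier raises the signature to the public exponent modulo n and checks that the result equals the message digest.

```c
/* Toy signature verification: "raise X to the power Y and check it equals Z".
 * Numbers are the classic textbook RSA example (p = 61, q = 53), far too
 * small for real use.  The signature was produced off-stage with the
 * private exponent d = 413. */
#include <stdio.h>

static unsigned long long modpow(unsigned long long b, unsigned long long e,
                                 unsigned long long m)
{
    unsigned long long r = 1;
    b %= m;
    while (e) {
        if (e & 1)
            r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    const unsigned long long n = 3233, e = 17;   /* public key */
    const unsigned long long digest = 2790;      /* "Z": hash of the record */
    const unsigned long long signature = 65;     /* "X": signed with d = 413 */

    if (modpow(signature, e, n) == digest)       /* X^Y mod n == Z ? */
        printf("signature verifies: send Bob to jail\n");
    else
        printf("signature does not verify\n");
    return 0;
}
```

With real keys the arithmetic is identical, just on numbers hundreds of digits long, which is precisely why, as described below, judges would rather not do it themselves.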

Now the rubber starts to hit the road. Since 2006, trucks in Europe have been using digital tachographs. Tachographs record a vehicle’s speed history and help enforce restrictions on drivers’ working hours. For many years they used circular waxed paper charts, which have been accepted in court as evidence just like any other paper record. However, paper charts are now being replaced with smartcards. Each driver has a card that records 28 days of infringement history, protected by digital signatures. So we’ve now got the first widely deployed system in which digital signatures are routinely adduced in evidence. The signed records are being produced to support prosecutions for working too long hours, for speeding, for tachograph tampering, and sundry other villainy.

So do magistrates really raise X to the power Y, find it’s equal to Z, and send Eddie off to jail? Not according to enforcement folks I’ve spoken to. Apparently judges find digital signatures too “difficult” as they’re all in hex. The police, always eager to please, have resolved the problem by applying standard procedures for “securing” digital evidence. When they raid a dodgy trucking company, they image the PC’s disk drive and take copies on DVDs that are sealed in evidence bags. One gets given to the defence and one kept for appeal. The paper logs documenting the procedure are available for Their Worships to inspect. Everyone’s happy, and truckers duly get fined.

In fact the trucking companies are very happy. I understand that 20% of British trucks now use digital tachographs, well ahead of expectations. Perhaps this is not uncorrelated with the fact that digital tachographs keep much less detailed data than could be coaxed out of the old paper charts. Just remember, you read it here first.

Recent talks: Chip & PIN, traffic analysis, and voting

In the past couple of months, I’ve presented quite a few talks, and in the course of doing so, travelled a lot too (Belgium and Canada last month; America and Denmark still to come). I’ve now published my slides from these talks, which might also be of interest to Light Blue Touchpaper readers, so I’ll summarize the contents here.

Two of the talks were on Chip & PIN, the UK deployment of EMV. The first presentation — “Chip and Spin” — was for the Girton village Neighbourhood Watch meeting. Girton was hit by a spate of card-cloning, eventually traced back to a local garage, so they invited me to give a fairly non-technical overview of the problem. The slides served mainly as an introduction to a few video clips I showed, taken from TV programmes in which I participated. [slides (PDF 1.1M)]

The second Chip & PIN talk was to the COSIC research group at K.U. Leuven. Due to the different audience, this presentation — “EMV flaws and fixes: vulnerabilities in smart card payment systems” — was much more technical. I summarized the EMV protocol, described a number of weaknesses which leave EMV open to attack, along with corresponding defences. Finally, I discussed the more general problem with EMV — that customers are in a poor position to contest fraudulent transactions — and how this situation can be mitigated. [slides (PDF 1.4M)]

If you are interested in further details, much of the material from both of my Chip & PIN talks is discussed in papers from our group, such as “Chip and SPIN”, “The Man-in-the-Middle Defence” and “Keep Your Enemies Close: Distance bounding against smartcard relay attacks”.

Next I went to Ottawa for the PET Workshop (now renamed the PET Symposium). Here, I gave three talks. The first was for a panel session — “Ethics in Privacy Research”. Since this was a discussion, the slides aren’t particularly interesting, but the panel will hopefully be the subject of an upcoming paper.

Then I gave a short talk at WOTE, on my experiences as an election observer. I summarized the conclusions of the Open Rights Group report (released the day before my talk) and added a few personal observations. Richard Clayton discussed the report in the previous post. [slides (PDF 195K)]

Finally, I presented the paper written by Piotr Zieliński and me — “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, which I previously mentioned in a recent post. In the talk I gave a graphical summary of the paper’s key points, which I hope will aid in understanding the motivation of the paper and the traffic analysis method we developed. [slides (PDF 2.9M)]

"No confidence" in eVoting pilots

Back on May 3rd, Steven Murdoch, Chris Wilson and I acted as election observers for the Open Rights Group (ORG) and looked at the conduct of the parish, council and mayoral elections in Bedford. Steven and I went back again on the 4th to observe their “eCounting” of the votes. In fact, we were still there on the 5th, at half-one in the morning, when the final result was declared after over fifteen hours.

Far from producing faster, more accurate, results, the eCounting was slower and left everyone concerned with serious misgivings — and no confidence whatsoever that the results were correct.

Today ORG launches its collated report into all of the various eVoting and eCounting experiments that took place in May — documenting the fiascos that occurred not only in Bedford but also in every other place that ORG observed. Their headline conclusion is “The Open Rights Group cannot express confidence in the results for areas observed” — which is pretty damning.

In Bedford, we noted that prior to the shambles on the 4th of May the politicians and voters we talked to were fairly positive about “e” elections — seeing them as inevitable progress. When things started to go wrong, they changed their minds…

However, there isn’t any “progress” here, and almost everyone technical who has looked at voting systems is concerned about them. The systems don’t work very well, they are inflexible, they are poorly tested and they are badly designed — and then when legitimate doubts are raised as to their integrity there is no way to examine the systems to determine that they’re working as one would hope.

We rather suspect that people are scared of being seen as Luddites if they don’t embrace “new technology” — whereas more technical people, who are more confident of their knowledge, are prepared to assess these systems on their merits, find them sadly lacking, and then speak up without being scared that they’ll be seen as ignorant.

The ORG report should go some way to helping everyone understand a little more about the current, lamentable, state of the art — and, if only just a little common sense is brought to bear, should help kill off e-Elections in the UK for a generation.

Here’s hoping!

Hacking tools are legal for a little longer

It’s well over a year since the Government first brought forward their proposals to make security research illegal (sorry: to crack down on hacking tools).

They revised their proposals a bit — in the face of considerable lobbying about so-called “dual-use” tools. These are programs that might be used by security professionals to check whether machines are secure, and by criminals to look for insecure ones to break into. In fact, most of the tools on a professional’s laptop, from nmap through wireshark to perl, could be used for both good and bad purposes.

The final wording means that to successfully prosecute the author of a tool you must show that they intended it to be used to commit computer crime; and intent would also have to be proved for obtaining, adapting, supplying or offering to supply … so most security professionals have nothing to worry about — in theory. In practice, of course, being accused of wickedness and having to convince a jury that there was no intent would be pretty traumatic!

The most important issue that the Home Office refused to concede was the distribution offence. The offence is to “supply or offer to supply, believing that it is likely to be used to commit, or to assist in the commission of [a Computer Misuse Act s1/s3 offence]”. The Home Office claim that “likely” means “more than a 50% chance” (apparently there’s case law on what “likely” means in a statute).

This is of course entirely unsatisfactory — you can run a website for people to download nmap for years without problems, but if one day you look at your weblogs and find that everyone in Ruritania (a well-known Eastern European criminal paradise) is downloading from you, then suddenly you’re committing an offence. Of course, if you didn’t look at your logs then you would not know — and maybe the lack of mens rea will get you off? (IANAL! So take advice before trying this at home!)

The hacking tools offences were added to the Computer Misuse Act 1990 (CMA), along with other changes to make it clear that DDoS is illegal, and along with changes to the tariffs on other offences to make them much more serious — and extraditable.

The additions are in the form of amendments incorporated in the Police and Justice Act 2006, which received Royal Assent on 8th November 2006.

However, the relevant sections, s35–38, are not yet in force! Viz: hacking tools are still not illegal, and will not be illegal until, probably, April 2008.

Continue reading Hacking tools are legal for a little longer

Phishing, students, and cheating at the lottery

Every so often I set an exam question to which I actually want to know the answer. A few years back, when the National Lottery franchise was up for tender, I asked students how to cheat at the lottery; the answers were both entertaining and instructive. Having a lot of bright youngsters think about a problem under stress for half an hour gives you rapid, massively-parallel requirements engineering.

This year I asked about phishing: here’s the question. When I set it in February, an important question for the banks was whether to combat phishing with two-factor authentication (give customers a handheld password calculator, as Coutts does) or two-channel authentication (send them an SMS when they make a sensitive transaction, saying for example “if you really meant to send $4000 to Latvia, please enter the code 4715 in your browser now”).
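To show what the server side of the two-channel approach might look like, here is a minimal sketch (my own, not any bank’s actual implementation; send_sms() is a placeholder stub). The crucial property is that the confirmation code is bound to the transaction details quoted in the SMS, so a man in the middle who silently alters the payee or the amount is exposed by the text of the message:

```c
/* Sketch of two-channel transaction confirmation (illustrative only).
 * The code is bound to the transaction the bank is about to execute and is
 * delivered out of band, so tampering with the transaction shows up in the
 * SMS the customer reads. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

struct transaction {
    const char *payee;
    int amount;
};

static void send_sms(const char *phone, const char *text)
{
    /* Placeholder: a real system would hand this to an SMS gateway. */
    printf("[SMS to %s] %s\n", phone, text);
}

static int confirm_transaction(const struct transaction *tx, const char *phone)
{
    char code[8], msg[160], reply[16];

    /* rand() is a placeholder; use a cryptographic RNG in practice. */
    snprintf(code, sizeof code, "%04d", rand() % 10000);
    snprintf(msg, sizeof msg,
             "If you really meant to send %d to %s, please enter the code %s "
             "in your browser now.", tx->amount, tx->payee, code);
    send_sms(phone, msg);

    printf("Enter confirmation code: ");
    if (!fgets(reply, sizeof reply, stdin))
        return 0;
    reply[strcspn(reply, "\n")] = '\0';

    return strcmp(reply, code) == 0;          /* proceed only if they match */
}

int main(void)
{
    struct transaction tx = { "an account in Latvia", 4000 };
    srand((unsigned)time(NULL));

    if (confirm_transaction(&tx, "+44 7700 900000"))
        printf("transaction authorised\n");
    else
        printf("transaction declined\n");
    return 0;
}
```

A production system would of course use a strong random code, expire it quickly and rate-limit attempts; the sketch only illustrates the transaction binding that makes the second channel useful against real-time man-in-the-middle attacks.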

At least two large UK banks are planning to go two-factor – despite eight-figure costs, the ease of real-time man-in-the-middle attacks, and other problems described here and here. Some banks have thought of two-channel but took fright at the prospect that customers might find it hard to use and deluge their call centres. So I set phishing as an exam question, inviting candidates to select two protection mechanisms from a list of four.

The overwhelming majority of the 34 students who answered the question chose two-channel as one of their mechanisms. I’ve recently become convinced this is the right answer, because of feedback from early adopter banks overseas who have experienced no significant usability problems. It was interesting to have this insight confirmed by the “wisdom of crowds”; I’d only got the feedback in the last month or so, and had not told the students.

Ross

PS: there’s always some obiter dictum that gives an insight into youth psychology. Here it was the candidate who said the bank should use SSL client certificates plus SMS notification, as that gives you three-factor authentication: something you know (your password), something you have (your SSL cert) and something you are (your phone). So now we know 🙂

Should there be a Best Practice for censorship?

A couple of weeks ago, right at the end of the Oxford Internet Institute conference on The Future of Free Expression on the Internet, the question was raised from the platform as to whether it might be possible to construct a Best Current Practice (BCP) framework for censorship.

If — the argument ran — IF countries were transparent about what they censored, IF there was no overblocking (the literature’s jargon for collateral damage), IF it was done under a formal (local) legal framework, IF there was the right of appeal to correct inadvertent errors, IF … and doubtless a whole raft more of “IFs” that a proper effort to develop a BCP would establish. IF… then perhaps censorship would be OK.

I spoke against the notion of a BCP from the audience at the time, and after some reflection I see no reason to change my mind.

There will be many more subtle arguments, much as there will be many more IFs to consider, but I can immediately see two insurmountable objections.

The first is that a BCP will inevitably lead to far more censorship, but now with the apparent endorsement of a prestigious organisation: “The OpenNet Initiative says that blocking the political opposition’s websites is just fine!” Doubtless some of the IFs in the BCP will address open political processes, and universal human rights … but it will surely come down to quibbling about language: terrorist/freedom-fighter; assassination/murder; dissent/rebellion; opposition/traitor.

The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible. All of the schemes for blocking content can be evaded by those with technical knowledge (or access to the tools written by others with that knowledge). Proxies, VPNs, Tor, fragments, ignoring resets… the list of evasion technologies is endless.

One of the best ways of spreading data to multiple sites is to attempt to remove it, and every few years some organisation demonstrates this again. Ad hoc replication doesn’t necessarily scale, but there are plenty of schemes in the literature for doing it on an industrial scale.

It’s clichéd to trot out John Gilmore’s observation that “the Internet treats censorship as a defect and routes around it”, but over-familiarity with the phrase should not hide its underlying truth.

So, in my view, a BCP will merely be used by the wicked as a fig-leaf for their activity, and by the ignorant to prop up their belief that it’s actually possible to block the content they don’t believe should be visible. A BCP is a thoroughly bad idea, and should not be further considered.