How Certification Systems Fail: Lessons from the Ware Report

Research in the Security Group has uncovered various flaws in systems despite their being certified as secure. Sometimes the certification criteria have been inadequate, and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system, but when evidence of certification comes up in court, the impact can be much wider.

There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether these differences are for good or for ill.

Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy in which we discuss what the Ware report can teach us about how today’s certifications work, and how they should work. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report”, is available open-access in the version submitted to the IEEE. The edited version, as it appears in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012, DOI:10.1109/MSP.2012.89), is only available to IEEE subscribers.

"Security Engineering" now available free online

I’m delighted to announce that my book Security Engineering – A Guide to Building Dependable Distributed Systems is now available free online in its entirety. You may download any or all of the chapters from the book’s web page.

I’ve long been an advocate of open science and open publishing; all my scientific papers go online and I no longer even referee for publications that sit behind a paywall. But some people think books are different. I don’t agree.

The first edition of my book was also put online four years after publication by agreement with the publishers. That took some argument but we found that sales actually increased; for serious books, free online copies and paid-for paper copies can be complements, not substitutes. We are all grateful to authors like David MacKay for pioneering this. So when I wrote the second edition I agreed with Wiley that we’d treat it the same way, and here it is. Enjoy!

Hard questions about quantum crypto and quantum computing

We’ve been assured for 29 years that quantum crypto is secure, and for 19 years that quantum computing is set to make public-key cryptography obsolete. Yet despite immense research funding, attempts to build a quantum computer that scales beyond a few qubits have failed. What’s going on?

In a new paper Why quantum computing is hard – and quantum cryptography is not provably secure, Robert Brady and I try to answer these questions. We argue that quantum entanglement may be modelled by coupled oscillators (as it already is in the study of Josephson junctions) and this could explain why it’s hard to get more than about three qubits. A companion paper of Robert’s on The irrotational motion of a compressible inviscid fluid presents a soliton model of the electron which shows for the first time how spin-1/2 symmetry, and the Dirac equation, can emerge in a completely classical system. There has been a growing amount of work recently on classical models of quantum behaviour; see for example Yves Couder’s beautiful experiments.

The soliton model challenges the Bell tests which purport to show that the wavefunctions of entangled particles are nonlocal. It also challenges the assumption that the physical state of a quantum system is entirely captured by its wavefunction Ψ. It follows that local hidden-variable theories of quantum mechanics are not excluded by the Bell tests, and that in consequence we do not have to believe the security proofs offered for EPR-based quantum cryptography. We gave a talk on this at the theoretical physics seminar at Warwick on January 31st; here are the slides and here’s the video, parts 1, 2, 3, 4 and 5.
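For readers who want a reminder of what the Bell tests actually bound, the standard CHSH statistic combines correlations measured at detector settings $a, a'$ and $b, b'$:

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2$$

for any local hidden-variable theory, while quantum mechanics allows values of $|S|$ up to $2\sqrt{2}$. The experimental claim is that observed violations of the bound rule out local hidden variables; it is this inference that the soliton model calls into question.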

CACM: A decade of OS access-control extensibility

Operating-system access control technology has undergone a remarkable transformation over the last fifteen years as appliance, embedded, and mobile device vendors transitioned from dedicated “embedded operating systems” to general-purpose ones — often based on open-source UNIX and Linux variants. Device vendors look to upstream operating system authors to provide the critical low-level software foundations for their products: network stacks, UI frameworks, application frameworks, etc. Increasingly, those expectations include security functionality — initially, features to prevent device bricking, but also features to constrain potentially malicious code from third-party applications, drawing on everything from digital signatures to access control and sandboxing.

In a February 2013 Communications of the ACM article, A decade of OS access-control extensibility, I reflect on the central role of kernel access-control extensibility frameworks in supporting security localisation, the adaptation of operating-system security models to site-local or product-specific requirements. As with device driver stacks or the virtual file system (VFS), the goal is to allow third-party developers or integrators to extend base operating system security models without being exposed to unstable programming interfaces or the risks associated with less integrated techniques such as system-call interposition.
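To make the structural idea concrete, here is a deliberately simplified sketch in Python (not the interface of any real kernel framework): a stable set of named check hooks, a site-local policy module that implements only the hooks it cares about, and restrictive composition across whatever policies are registered.

```python
# Toy sketch of a hook-based access-control extension framework.
# This is an illustration of the structural idea only, not the API of the
# TrustedBSD MAC Framework or any other real kernel framework.

class Policy:
    """Base class: hooks default to 'no opinion' (permit)."""
    name = "base"

    def check_vnode_open(self, cred, path, write):
        return 0  # 0 = permit, non-zero = errno-style denial

    def check_socket_create(self, cred, domain):
        return 0


class SecureDirPolicy(Policy):
    """Hypothetical site-local policy: only root may write under /secure."""
    name = "secure_dir"

    def check_vnode_open(self, cred, path, write):
        if write and path.startswith("/secure/") and cred["uid"] != 0:
            return 13  # EACCES
        return 0


class Framework:
    """The 'kernel side': each call site consults every registered policy
    and denies if any one of them denies (restrictive composition)."""

    def __init__(self):
        self.policies = []

    def register(self, policy):
        self.policies.append(policy)

    def check_vnode_open(self, cred, path, write):
        for p in self.policies:
            err = p.check_vnode_open(cred, path, write)
            if err:
                return err
        return 0


if __name__ == "__main__":
    fw = Framework()
    fw.register(SecureDirPolicy())
    user = {"uid": 1000}
    print(fw.check_vnode_open(user, "/secure/keys", write=True))      # 13 (denied)
    print(fw.check_vnode_open(user, "/home/user/notes", write=True))  # 0 (permitted)
```

The point of the structure is that the hook table stays stable while policies come and go, which is what lets integrators localise the security model without patching kernel internals.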

A case in point is the TrustedBSD MAC Framework, developed and deployed over the 2000s with support from DARPA and the US Navy, in collaboration with several industrial partners. In the article, I consider our original motivations, context, and design principles, but also track how the work, relying heavily on open-source methodology and community, transitioned into a number of widely used products, including the open-source FreeBSD operating system, Apple’s Mac OS X and iOS operating systems, Juniper’s Junos router operating system, and nCircle’s IP360 product. I draw conclusions on things we got right (common infrastructure spanning models; tight integration with the OS concurrency model) and wrong (omitting OS privilege model extension; not providing an application author identity model).

Throughout, the diversity of approaches and models suggests an argument for domain-specific policy models that respond to local tradeoffs between performance, functionality, complexity, and security, rather than a single policy model to rule them all. I also emphasise the importance of planning for long-term sustainability for research products — critical to adoption, especially via open source, but also frequently overlooked in academic research.

An open-access (and slightly extended) version of the article can be found on ACM Queue.

Dear ICO: disclose Sony's hash algorithm!

Today the UK Information Commissioner’s Office levied a record £250k fine against Sony over their 2011 PlayStation Network breach in which 77 million passwords were stolen. Sony stated that they hashed the passwords, but provided no details. I was hoping that investigators would reveal what hash algorithm Sony used, and in particular whether they salted and iterated the hash. Unfortunately, the ICO’s report failed to provide any such details:

The Commissioner is aware that the data controller made some efforts to protect account passwords, however the data controller failed to ensure that the Network Platform service provider kept up with technical developments. Therefore the means used would not, at the time of the attack, be deemed appropriate, given the technical resources available to the data controller.

Given how often I see password implementations use a single iteration of MD5 with no salt, I’d consider that to be the most likely interpretation. It’s inexcusable, though, for a 12-page report written at public expense to omit such basic technical details. As I said at the time of the Sony breach, it’s important to update breach notification laws to require that password hashing details be disclosed in full. It makes a difference for users affected by the breach, and it might help motivate companies to get these basic security mechanics right.
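For illustration, the gap between the scheme I suspect and a reasonable one is only a few lines of code. The sketch below uses PBKDF2 from Python’s standard library; the parameters are purely illustrative and are not anything Sony is known to have used.

```python
# Contrast between an unsalted, single-iteration MD5 hash and a salted,
# iterated hash. Parameters are illustrative only.
import hashlib
import hmac
import os

def weak_hash(password: str) -> str:
    # Unsalted, single-iteration MD5: precomputed tables and commodity GPUs
    # make this trivial to reverse at scale.
    return hashlib.md5(password.encode()).hexdigest()

def strong_hash(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    # A per-user random salt defeats precomputation; the iteration count makes
    # each guess proportionally more expensive for an attacker.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest  # store both (plus the iteration count) per user

def verify(password: str, salt: bytes, digest: bytes, iterations: int = 100_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = strong_hash("correct horse battery staple")
    print(weak_hash("correct horse battery staple"))
    print(verify("correct horse battery staple", salt, digest))  # True
    print(verify("wrong guess", salt, digest))                   # False
```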

Moore's Law won't kill passwords

Computers are getting exponentially faster, yet the human brain is constant! Surely password crackers will eventually beat human memory…

I’ve heard this fallacy repeated enough times, usually soon after the latest advance in hardware for password cracking hits the news, that I’d like to definitively debunk it. Password cracking is certainly getting faster. In my thesis I charted 20 years of password cracking improvements and found an increase of roughly a factor of 1,000 in the number of guesses per second per unit cost that could be achieved, almost exactly a Moore’s Law-style doubling every two years. The good news though is that password hash functions can (and should) co-evolve to get proportionately costlier to evaluate over time. This is a classic arms race, and keeping pace simply requires regularly increasing the number of iterations in a password hash. We can even improve against password cracking over time by using memory-bound functions, because memory speeds aren’t increasing nearly as quickly and are harder to parallelise. The scrypt key derivation function is a good implementation of a memory-bound password hash, and every high-security application should be using it or something similar.
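As a rough illustration of what this looks like in practice, here is a minimal sketch using the scrypt binding in Python’s standard library; the cost parameters are illustrative and should be tuned (and periodically raised) to your own latency and memory budget.

```python
# Memory-hard password hashing with scrypt (hashlib.scrypt needs Python 3.6+
# built against OpenSSL 1.1+). With n=2**15 and r=8, each evaluation needs
# roughly 128 * n * r bytes, i.e. about 32 MiB of memory.
import hashlib
import hmac
import os

SCRYPT_PARAMS = dict(n=2**15, r=8, p=1, maxmem=64 * 1024 * 1024)

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, key  # store the salt and parameters alongside the key

def check_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, key)
```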

The downside of this arms race is that password hashing will never get any cheaper to deploy (even in inflation-adjusted terms). Hashing a password must be just as slow and costly in real terms 20 years from now as it is today, or else security will be lower. Moore’s Law will never reduce the expense of running an authentication system, because security depends on this expense. It also needs to be a non-negligible expense: achieving any real security requires that password verification take on the order of hundreds of milliseconds or even whole seconds. Unfortunately this hasn’t been the experience of the past 20 years. MD5 was published over 20 years ago and is still the most common implementation I see in the wild, yet it has gone from being relatively expensive to evaluate to extremely cheap. Moore’s Law has indeed broken MD5 as a password hash, and no serious application should still use it. Human memory isn’t more of a problem today than it used to be, though. The problem is that we’ve chosen to let password verification become too cheap.
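One practical corollary is that the work factor is not a constant you set once: it should be re-measured periodically and ratcheted upwards so that verification keeps taking a few hundred milliseconds on current hardware. A sketch of such a calibration follows; the 250 ms target is an arbitrary example.

```python
# Pick a PBKDF2 iteration count so that one verification takes roughly the
# target time on the hardware doing the authentication. Re-run periodically
# and ratchet the stored count upwards; never lower it.
import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 0.25, start: int = 10_000) -> int:
    salt = os.urandom(16)
    iterations = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark password", salt, iterations)
        elapsed = time.perf_counter() - t0
        if elapsed >= target_seconds:
            return iterations
        # Cost is linear in the iteration count, so scale up proportionally
        # (at least doubling each round to cope with timing noise).
        iterations = int(iterations * max(2, target_seconds / max(elapsed, 1e-6)))

if __name__ == "__main__":
    print(calibrate_iterations())
```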

Privacy considered harmful?

The government has once again returned to the vision of giving each of us an electronic health record shared throughout the NHS. This is about the fourth attempt in twenty years, yet the ferocity of the latest push has taken doctors by surprise.

Seventeen years ago, I was advising the BMA on safety and privacy, and we explained patiently why this was a bad idea. The next government went ahead anyway, which led predictably to the disaster of NPfIT. Nonetheless, enough central systems were got working to seriously undermine privacy. Colleagues and I wrote the Database State report on the dangers of such systems; it was adopted as Lib Dem policy, and aspects were adopted by the Conservatives too. That did lead to the abandonment of the ContactPoint children’s database, but there was a rapid U-turn on health privacy after the election.

The big pharma lobbyists got their way after they got health IT lobbyist Tim Kelsey appointed as Cameron’s privacy tsar, and it’s all been downhill from there. The minister says we have an opt-out; but no-one seems to have told him that GPs will in future be compelled to upload a lot of information about us through a system called GPES if they want to be paid (they had an opt-out, but it’s being withdrawn from April). And you can’t even register under a false name any more unless you use a stolen passport.

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young, who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These were long thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending the machine sequences of commands that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.
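To give a flavour of what these attacks look like, consider the decimalisation-table trick, one well-known example from this line of research: the table that maps hex digits to decimal PIN digits is an attacker-supplied parameter of the PIN-verification command, so by feeding in skewed tables and watching which verifications fail, a programmer with API access can learn which digits a customer’s PIN contains in a handful of calls. The code below is a toy simulation written for illustration, not the command set of any real HSM.

```python
# Toy simulation of a decimalisation-table API attack. The "HSM" is a
# deliberately simplified model (a hash stands in for DES encryption of the
# account number under the secret PIN-derivation key).
import hashlib
import secrets

HEX_DIGITS = "0123456789ABCDEF"
STANDARD_DECTAB = "0123456789012345"  # maps hex 0..F to decimal PIN digits

class ToyHSM:
    def __init__(self):
        self._pin_key = secrets.token_bytes(16)  # secret PIN-derivation key

    def _natural_pin(self, pan: str, dectab: str) -> str:
        # Derive a 4-digit PIN: "encrypt" the account number, decimalise the
        # result through the supplied table, keep the first four digits.
        block = hashlib.sha256(self._pin_key + pan.encode()).hexdigest().upper()
        return "".join(dectab[HEX_DIGITS.index(c)] for c in block)[:4]

    def verify_pin(self, pan: str, dectab: str, trial_pin: str) -> bool:
        # The design flaw: the decimalisation table is an unauthenticated,
        # caller-controlled input to the verification command.
        return self._natural_pin(pan, dectab) == trial_pin

def digits_in_pin(hsm: ToyHSM, pan: str) -> set:
    """Learn which decimal digits occur in the PIN using at most 10 API calls."""
    present = set()
    for d in "0123456789":
        # Table that maps to '1' exactly where the standard table maps to d,
        # and to '0' everywhere else.
        probe = "".join("1" if STANDARD_DECTAB[i] == d else "0" for i in range(16))
        # Under this table the derived PIN is '0000' iff digit d does NOT
        # appear in the real PIN, so a failed verification reveals presence.
        if not hsm.verify_pin(pan, probe, "0000"):
            present.add(d)
    return present

if __name__ == "__main__":
    hsm = ToyHSM()
    pan = "4929000000006"
    real_pin = hsm._natural_pin(pan, STANDARD_DECTAB)  # what the bank would issue
    print("real PIN:", real_pin)
    print("digits recovered via the API:", sorted(digits_in_pin(hsm, pan)))
```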

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) for DMCA infringement in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Interviews on the clean-slate design argument

Over the past two years, Peter G. Neumann and I, along with a host of collaborators at SRI International and the University of Cambridge Computer Laboratory, have been pursuing CTSRD, a joint computer-security research project exploring fundamental revisions to CPU design, operating systems, and application program structure. Recently we’ve been talking about the social, economic, and technical context for that work in a series of media interviews, including one with ACM Queue on research into the hardware-software interface posted previously.

A key aspect to our argument is that the computer industry has been pursuing a strategy of hill climbing with respect to security; if we were willing to take a step back and revisit some of our more fundamental design choices, learning from longer-term security research over the last forty years, then we might be able to break aspects of the asymmetry driving the current arms race between attackers and defenders. This clean-slate argument doesn’t mean we need to throw everything away, but does suggest that more radical change is required than is being widely considered, as we explore in two further interviews:

Identifying file sharers — the US approach

Last Friday’s successful appeal in the Golden Eye case means that significantly more UK-based broadband users will shortly be receiving letters saying that they appear to have been sharing pornographic films. Recipients of these letters could do worse than to start by consulting this guide as to what to do next.

Although I acted as an expert witness in the original hearing, I was not involved in the appeal, since it was not concerned with technical matters but with deciding whether Golden Eye could pursue claims for damages on behalf of third-party copyright holders (the court says that they may now do so).

Subsequent to the original hearing, I assisted Consumer Focus by producing an expert report on how evidence in file sharing cases should be collected and processed. I wrote about this here in July.

In September, at the request of Consumer Focus, I attended a presentation given by Ms Marianne Grant, Senior Vice President of the Motion Picture Association of America (MPAA) in which she outlined the way in which rights holders in the United States were proposing to monitor unauthorised file sharing of copyright material.

I had a number of concerns about these proposals and I wrote to Consumer Focus to set these out. I have now noted (somewhat belatedly, hence this holiday season blog post) that Consumer Focus have made this letter available online, along with their own letter to the MPAA.

So 2013 looks like being “interesting times” for Internet traceability — with letters going out in bulk to UK consumers from Golden Eye, and the US “six strikes” process forecast to roll out early next year (though it has been forecast to start in November 2012, July 2012 and many dates before that, so we shall see).