Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but when we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

On the measurement of banking fraud

Kidnapping is not an easy crime to be successful at…

… it is of course easy to grab the heiress from outside the nightclub at 3am. It’s easy to incarcerate her at the remote farmhouse. If you pick the right henchmen then it’s easy to cut off her ear and post it off to the frantic family.

Thereafter it gets very difficult — you must communicate directly several times and you must physically go and pick up the bag of money. These last two tasks are extremely difficult to manage successfully, which is why police forces solve kidnap cases so often (in its first five years the Metropolitan Police Kidnap Unit solved 100% of its cases).

Theft from online bank accounts also has its difficulties. It remains relatively easy to gain access to a victim’s bank account and to issue instructions on their behalf. Last decade this was all about “phishing” — gathering credentials by creating fake websites; more recently credentials have been compromised by means of “man-in-the-browser” malware: you think you are paying your gas bill and that’s what your browser tells you is occurring. In practice you’re approving a money transfer to a criminal.

However, moving the money to another account does not mean that the criminal has got away with it. If the bank notices a suspicious pattern of transfers then they can investigate, and when they see the tell-tale signs of fraud then the transfers (which were only changes to computer records) can be trivially reversed. It is only when the criminal can extract folding money from an ATM, or can move the money abroad in such a way that it will never be repatriated, that they have been truly successful. So, like kidnap, theft from bank accounts is somewhat harder to pull off than one might initially think.

This has turned out to be a surprise to the Treasury Select Committee.

Last month I was asked to give oral evidence to them, and the very first question was about how much fraud there is in online banking. I explained that the banks collate figures showing how much money is actually “lost” (viz. the amount that the banks end up, usually at least, reimbursing to the unfortunate customers who have been defrauded).

However, industry insiders say that about twice this amount is moved to another account but — and this is basically Very Good News — it is then transferred back so there is no actual loss to anyone. We don’t know the exact figures here, because they are not collated and published.

Furthermore, the banks should also be measuring “money at risk”, that is, the total amount sitting in the compromised accounts. If their security measures failed and the criminals stole every last penny then these would be actual losses — an order of magnitude more, perhaps, than the published figures.

The Select Committee chairman is now writing to the banks to ask if this is all true and what the “true” fraud figures might be. If the banks reply with detailed information then we might finally understand quite how difficult bank fraud is. I fully expect the story will run something along the lines that <n> accounts with 10,000 pounds in them are compromised, that the crooks fraudulently transfer 995 pounds from most, but not all, of these <n> — but that half the time the fraudulent transaction is reversed.

If this analysis is correct then online banking fraud is still, on average, much more lucrative than kidnapping — but we must make up our minds as to whether to measure it using the figure of 10,000 or 995 or “about half of 995 is permanently lost”. There is justification for every way of measuring the problem — but it’s important to understand the limitations of any single measurement; failure to do so will mean that the banks will not deploy the right level of security measures — and the politicians will fail to give the issue an appropriate level of consideration.
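To see how far apart these candidate measures can be, here is a toy calculation in Python; the 1,000 compromised accounts and the 50% reversal rate are invented for illustration, not figures from the banks or the Committee:

```python
# Toy comparison of the three candidate fraud measures discussed above.
# Every number here is hypothetical; in particular the 1,000 compromised
# accounts and the 50% reversal rate are invented, not bank statistics.
n_accounts = 1_000        # compromised accounts (hypothetical)
balance = 10_000          # pounds sitting in each account
transfer = 995            # pounds the crooks try to move out of each
reversal_rate = 0.5       # fraction of fraudulent transfers reversed

money_at_risk = n_accounts * balance              # 10,000,000
money_moved = n_accounts * transfer               # 995,000
actual_loss = money_moved * (1 - reversal_rate)   # 497,500

print(f"money at risk: £{money_at_risk:,}")
print(f"money moved:   £{money_moved:,}")
print(f"actual loss:   £{actual_loss:,.0f}")
```

The three numbers differ by well over an order of magnitude, which is precisely why it matters which one gets quoted.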

Why password managers (sometimes) fail

We are asked to remember far too many passwords. This problem is most acute on the web. And thus, unsurprisingly, it is on the web that technical solutions have had most success in replacing users’ ad hoc coping strategies. One of the longest established and most widely adopted technical solutions is a password manager: software that remembers passwords and submits them on the user’s behalf. But this isn’t as straightforward as it sounds. In our recent work on bootstrapping adoption of the Pico system [1], we’ve come to appreciate just how hard life is for developers and maintainers of password managers.

In a paper we are about to present at the Passwords 2014 conference in Trondheim, we introduce our proposal for Password Manager Friendly (PMF) semantics [2]. PMF semantics are designed to give developers and maintainers of password managers a bit of a break and, more importantly, to improve the user experience.


Spooks behaving badly

Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.

Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months working for Google on sabbatical in 2011, working with the click fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam, which was to hire lawyers to write angry letters to Google. If they’d hired a geek who could have talked to the abuse teams constructively, they’d have achieved an awful lot more.

The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ already complained in the Telegraph that they still haven’t got all the murderers’ Facebook traffic; this is no doubt due to the fact that the Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, then it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.

WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security will be held at Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new, interdisciplinary, open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!

Economics of Cybersecurity MOOC

Security economics is a thriving research discipline, kicked off in 2001 with Ross Anderson’s seminal paper. There has been an annual workshop since 2002. In recent years there has also been an effort to integrate some of the key concepts and findings into course curricula, including in the Part II Security course at Cambridge and my own course at SMU.

We are pleased to announce that on 20 January 2015, we will launch an online course on the Economics of Cybersecurity, as part of edX Professional Education. The course provides a thorough introduction to the field, delivered by leading researchers from Delft University of Technology, the University of Cambridge, the University of Münster and Southern Methodist University.

Nikka – Digital Strongbox (Crypto as Service)

Imagine that somewhere on the internet, which no-one trusts, there is a piece of hardware, a small computer, that works just for you. You can trust it. You can depend on it. Things may get rough, but it will stay there to get you through. That is Nikka: the fixed point on which you can build your security and trust. [Now a Kickstarter project]

You may remember our proof-of-concept implementation of password protection for servers – Hardware Scrambling (published here in March). The password scrambler was a small dongle that could be plugged into a Linux computer (we used a Raspberry Pi). Its only purpose was to provide a simple API for encrypting passwords (though it could just as well be credit card numbers or anything else up to 32 bytes in length). The beginning of something big?
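For readers who missed that post, the underlying idea is easy to sketch in software: instead of hashing passwords with a value the server stores, you HMAC them with a key that never leaves the dongle. The snippet below is a plain-Python illustration of the concept, not the dongle’s actual API; the function name and the in-memory key are invented for the example.

```python
import hmac, hashlib, os

# Illustration only: on the real device the key lives inside the hardware
# and never reaches the host; here we simply generate one in memory.
DEVICE_KEY = os.urandom(32)

def scramble(secret: bytes) -> bytes:
    """HMAC a password (or card number, or any secret up to 32 bytes)
    with a key the host never sees, and return the value to store."""
    if len(secret) > 32:
        raise ValueError("secret longer than 32 bytes")
    return hmac.new(DEVICE_KEY, secret, hashlib.sha256).digest()

# A server stores scramble(password) instead of a plain hash, so a stolen
# password database is useless to anyone without access to the hardware key.
stored = scramble(b"correct horse battery staple")
```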

It received some attention (Ars Technica, Slashdot, LWN, …), certainly more than we expected at the time. The discussions that followed also taught us a couple of lessons about how people (mostly geeks, in this context) view security – particularly the default distrust expressed by those commenting on articles about our password scrambler.

We eventually decided to build a proper hardware cryptographic platform that could be used for cloud applications. Our requirements were simple. We wanted something fast, “secure” (CC EAL5+ or even FIPS 140-2 certified), scalable, easy to use (no complicated API, just one function call), and provided as a service, so that no-one has to pay the upfront price of an HSM just to have a go at using proper cryptography for their new or old application. That was the beginning of Nikka.

[Figure: Nikka setup]

This is our concept: Nikka comprises a set of powerful servers installed in secure data centres. These servers can form clusters, delivering high availability and scalability to their clients. Secure hardware forms the backbone of each server and provides a simple interface for applications. The second part of Nikka is the set of user applications, plugins, and libraries for easy deployment and everyday “invisible” use. Operational procedures, processes, policies, and audit logs then guarantee that what we say is actually being done.

We have been building it for a few months now and the scalable cryptographic core seems to work. We have managed to run long-term tests of 150 HMAC transactions per second (HMAC & RNG for password scrambling) on a small development platform while fully utilising available secure hardware. The server is hosted at ideaSpace and we use it to run functional, configuration and load tests.

We have never before designed a system with so many independent processes – the core is completely asynchronous (starting with Netty for the TCP interface) and we have quickly come to appreciate the detailed trace logging we implemented from the very beginning. Each time we start digging we find something interesting. The real-time visualisation of performance is quite nice as well.
[Figure: real-time performance monitoring]

Nikka is basically a general-purpose cryptographic engine with a middleware layer for easy integration. The password HMAC is now used only as one of the test applications. Users can share or reserve processing units that have Common Criteria evaluations or even FIPS 140-2 certification – with the option of physical hardware separation between users.

If you like what you have read so far, you can keep reading, watching and supporting us on Kickstarter. It has been great fun so far and we want to turn it into something useful in 2015. If it sounds interesting and you would like to test it early next year, let us know! @DanCvrcek

Pico part IV – Somethings you have

A light-hearted look at the ideas presented a couple of weeks ago in a paper at the UPSIDE workshop in Seattle.

USS Enterprise going even more boldly than usual

One of the problems inherent in boldly going where no man has gone before is that, more often than you might imagine, you may be required to blow up the spaceship on which you’re travelling. James T Kirk was forced to do this at least once, and he and his successors came perilously close to it on several other occasions.

But who gets to make this decision? Given the likelihood that some members of the crew may be possessed by alien life-forms at the time, it seems unwise to leave such decisions to any single individual, even if he’s the captain. And you also can’t require any specific other staff-member to back him up, since they may have had to sacrifice themselves earlier for the good of the many. Auto-destruct sequences, therefore, can typically only be initiated when any three senior officers agree and give their secret passwords.

It is a wonderful sign of the times that, when the first Star Trek episode in which this happens was broadcast in 1969, the passwords required to detonate a spaceship with 400 passengers on board were noticeably weaker than those now required to log in to a typical Kickstarter project.

Secret sharing – background

Some of the most beautiful examples, I believe, of using a simple, readily-understandable idea to solve a complex problem can be found in the secret-sharing systems proposed almost simultaneously by Adi Shamir and George Blakley, way back in 1979. These are just the sort of thing you need to control self-destruct sequences, and are certainly better than the four-character passwords used by Kirk and Scotty.

Most readers of LBT will be familiar with this, but in case you aren’t, the underlying concept is to encode the secret you need to protect – for example, the auto-destruct code – as the coefficients of a particular quadratic equation. If you know any three points on the curve, you can deduce the underlying equation. So all you have to do is give each of Kirk, Scotty, Spock and Bones the coordinates of a point, and any three of these ’shares’ can then unlock the secret which will trigger the detonation. If you want to insist on four or more officers, you use a higher-order polynomial. This is Shamir’s algorithm; Blakley’s is similar, but whichever you use, there is, I think, a real elegance in solving a difficult technical problem using a concept that can be explained in a paragraph to anyone with high-school level maths.
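For the curious, here is a minimal sketch of Shamir’s scheme in Python. The prime, the secret value and the 3-of-4 officer assignment are chosen purely for illustration; conventionally the secret is hidden in the constant term and the remaining coefficients are random.

```python
import secrets

P = 2**61 - 1  # a prime large enough for a short numeric secret

def split(secret, k, holders):
    """Split `secret` so that any k of the named holders can recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    shares = {}
    for x, name in enumerate(holders, start=1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares[name] = (x, y)
    return shares

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# 3-of-4: any three senior officers can recover the auto-destruct code.
officers = ["Kirk", "Spock", "Scotty", "Bones"]
shares = split(secret=123456789, k=3, holders=officers)
assert reconstruct([shares["Kirk"], shares["Spock"], shares["Bones"]]) == 123456789
assert reconstruct([shares["Spock"], shares["Scotty"], shares["Bones"]]) == 123456789
```

With only two shares the quadratic is underdetermined, so a pair of possessed officers learn nothing at all about the code.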

A somewhat more down-to-earth example of its use can be found in the DNSSEC system used to secure the core servers of the DNS, on which so much of the internet depends. The master key needed to reset this, in the event of some global calamity, is divided up between seven keyholders based in different countries. One of them told me over a beer recently that it required any five of them to get together in the US to be able to reconstruct the system.

PICO

In the Pico project (see earlier posts), we’re exploring the same secret-sharing concepts but at a personal rather than a global or galactic level.

The idea is that your Pico device, which provides authentication on your behalf to a wide variety of systems, should only be able to do so when it is confident that you are present. There are several ways it might detect that – biometrics perhaps being the most obvious – but we wanted a system that would be completely automatic, continuous and non-intrusive: it wouldn’t require you to re-scan your retina every few minutes, for example.

So the Pico assumes that you are present only if it can detect, nearby, a sufficient number of other devices that you normally carry. These ‘Picosiblings’ may be special devices dedicated to this purpose – bluetooth-enabled cufflinks or earrings, for example – or the Picosibling functionality may be built into phones, smart watches, laptops and car key-fobs. But, together, a sufficient number of them constitute an ‘aura’ of safety in which the Pico feels comfortable about releasing your credentials when you use it to log in.

You could just program the Pico to make this decision, but a more secure approach is to use secret-sharing. Your Pico stores your credentials in encrypted form, and the Picosiblings actually enable it by each giving up a share of the secret used to do the encryption. Since this information is not cached, even the Pico cannot decide to expose the information when you are not around.

Some challenges

This basic Picosibling concept is not bad, but can we improve it? How well can we model real-world user behaviour using these ideas? Where might they fall down, and do we need to invent anything new to deal with the edge cases? And can it help us make better decisions about when to blow up a starship?

Let’s consider some scenarios.

1. My Precious

My car keys are with me about 30% of the time, and may occasionally be with someone else. My wedding ring, on the other hand, has been a 100% reliable indicator of my presence for the last 23 years. (It’s too bad my wife didn’t give me a Bluetooth-enabled one.) We all have different possessions which may be more or less reliable indicators that we are around, and we can represent this by giving them different numbers of shares. Most of my Picosiblings might have one share, but my wallet and phone might have two, and my wedding ring four. If my ring is absent, you’ll need significantly more confirmation from other sources, in the same way that you might need disproportionately more senior officers if the captain is absent.
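Mechanically this is still the same secret sharing, just with some devices holding more than one share. Ignoring the cryptography, the unlocking decision reduces to a weighted threshold; here is a tiny sketch of that policy (the weights match the paragraph above, but the threshold of six shares is an invented example):

```python
# Weighted-threshold policy: each Picosibling holds some number of shares,
# and the Pico unlocks only if the devices it can see reach the threshold.
SHARES = {
    "wedding_ring": 4,
    "wallet": 2,
    "phone": 2,
    "car_keys": 1,
    "watch": 1,
    "cufflinks": 1,
}
THRESHOLD = 6  # hypothetical number of shares needed to reconstruct the key

def unlocked(nearby):
    return sum(SHARES.get(device, 0) for device in nearby) >= THRESHOLD

print(unlocked({"wedding_ring", "phone"}))        # True: 4 + 2 = 6
print(unlocked({"wallet", "phone", "car_keys"}))  # False: without the ring, only 5
```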

2. Smarter Picosiblings

Is my watch a good indicator of my presence? Well, yes, when it’s with me, but not when it’s sitting on my bedside table. There may be occasions when Picosiblings are more or less confident that you are present. So we can give each sibling a certain number of shares, but it may not choose to release them to the Pico in all situations. A sibling that detects and recognises your heartbeat might give out differing numbers of its shares based on how confident it is about its recognition. Something you normally wear or carry might include an accelerometer, and give out fewer shares if it hasn’t moved in the last few minutes. The number of shares might decay over time, as the device gets further from the moment at which it was confident of your presence, so a fingerprint sensor might be sufficient to unlock your Pico on its own, if you’ve used it in the last ten seconds. Another metric might be proximity: if the Picosibling could detect the Pico was close by, it might be willing to hand over more shares than if it were on the other side of the room.
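As a sketch of how such a sibling might decide how many of its shares to hand over (the confidence scores, the ten-minute decay window and the share count are entirely made up):

```python
import time

MAX_SHARES = 4          # shares this sibling holds (hypothetical)
DECAY_SECONDS = 600     # how long a positive observation stays "fresh"

def shares_to_release(confidence, last_confident_at, now=None):
    """Release more shares the more confident the sibling is that you are
    present, scaling down as that confident observation ages."""
    now = time.time() if now is None else now
    age = max(0.0, now - last_confident_at)
    freshness = max(0.0, 1.0 - age / DECAY_SECONDS)
    return int(MAX_SHARES * confidence * freshness)

# A heartbeat sensor 90% sure it saw you a minute ago releases 3 shares;
# the same reading half an hour old releases none.
print(shares_to_release(0.9, last_confident_at=time.time() - 60))    # 3
print(shares_to_release(0.9, last_confident_at=time.time() - 1800))  # 0
```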

3. The trouble with Klingons

But suppose we need something more sophisticated than simply the raw number of shares to unlock the secret? If your starship has a large number of Klingon officers, who place a high value on dying honourably in battle, you might wish to insist that at least two races were involved in any major decision like this.

Fortunately we don’t need a new mechanism: we can just use our current system twice over. We split the core secret into several shares: one for humans, one for Vulcans, one for Klingons, one for Betazoids. Each of those shares can then be subdivided in the same way for the individuals concerned. Two or more Vulcans are needed to reconstruct the Vulcan share, and shares from two or more species are needed actually to blow up the ship.
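Here is a policy-level sketch of that two-level construction, counting shares rather than doing the arithmetic of the earlier sketch (the officer rosters and the 2-of-4 species threshold are, of course, made up):

```python
# Two-level threshold: first reconstruct each species' share from its
# officers, then reconstruct the master secret from the species' shares.
OFFICERS = {
    "human":    {"Kirk", "Sulu", "Uhura"},
    "vulcan":   {"Spock", "Saavik"},
    "klingon":  {"Worf", "Martok"},
    "betazoid": {"Troi", "Lwaxana"},
}
PER_SPECIES_K = 2   # officers needed to reconstruct a species' share
SPECIES_K = 2       # species' shares needed to reconstruct the master secret

def auto_destruct_authorised(present):
    species_ok = sum(
        1 for members in OFFICERS.values()
        if len(members & present) >= PER_SPECIES_K
    )
    return species_ok >= SPECIES_K

print(auto_destruct_authorised({"Kirk", "Sulu", "Worf"}))            # False: only one species complete
print(auto_destruct_authorised({"Kirk", "Sulu", "Worf", "Martok"}))  # True: two species complete
```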

In the Pico world, suppose your shoes are all Picosiblings. If you have many pairs of shoes, someone entering your bedroom may find they have plenty of shares even if you aren’t around. We can use this method of creating ‘categories of shares’ to limit the effect that a particular class of device can have, to ensure that shoes alone can never be sufficient to do the unlocking.

4. Greater than the sum of its parts

Suppose my car, my car keys, and my house keys are all Picosiblings, and they each have one share. Anyone inside my car is likely to have my car keys, including a thief who has just found them on the pavement. But we could reasonably argue that someone who is inside my car and has my house keys is more likely to be me; that a stranger is less likely to have this particular combination of Picosiblings, and so their conjunction should be worth more than might be indicated by a simple addition of their values.

We can achieve this in various ways; a simple approach is just to cut a share in half and give one half to each sibling. These would be sent to the Pico along with the complete shares, and if it receives matching halves from two devices, it would be able to construct extra shares which correspond to the value of their relationship.
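One way to sketch the bookkeeping: each member of a designated pair carries, alongside its ordinary shares, half of an extra ‘relationship’ share, and the Pico counts that extra share only when both halves turn up. The pairings and counts below are illustrative:

```python
# Ordinary shares per device, plus "relationship" half-shares: the bonus
# share only counts when the Pico sees both halves of the same pair.
SHARES = {"car": 1, "car_keys": 1, "house_keys": 1}
HALF_SHARES = {("car", "house_keys"): 1}   # this conjunction is worth one extra share

def shares_collected(nearby):
    total = sum(SHARES.get(device, 0) for device in nearby)
    for (a, b), bonus in HALF_SHARES.items():
        if a in nearby and b in nearby:
            total += bonus
    return total

print(shares_collected({"car", "car_keys"}))     # 2: a thief with the dropped keys
print(shares_collected({"car", "house_keys"}))   # 3: 2 ordinary + 1 relationship bonus
```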

5. You can’t take that away from me

Here’s a challenge for readers. Can you think of a good way to implement negative shares? Let me explain…

Suppose you are in an airport check-in queue and a significant number of your worldly possessions are in the suitcase beside you. Someone who pinches your Pico from your pocket will have plenty of shares accessible either by standing behind you, or by getting close to your suitcase at some later point in the baggage-handling process. Travelling is one of those situations when you might feel a bit vulnerable and so require more confirmation of your presence than you would in your home country. If your suitcase could emit negative shares, then it would raise the threshold of affirmation needed by your Pico when it was around.

There are many challenges here, both technical and practical: for example, anyone who knows that there is a negative influence nearby may be able to remove it, e.g. by tipping the items out of the suitcase. But it would at least deter subtle unobserved attacks in the check-in queue. It would be good if observers, and ideally the Pico itself, could not distinguish positive from negative shares except with regard to their final effect. And so on. As I said: a challenge for the reader!

Sharing options

So we’ve introduced several ideas which can help with some real-life scenarios, yet they’re all based on the same simple secret-sharing concept:

  • Variable numbers of shares per device (based on how reliable each device is as an indicator)
  • Dynamically-changing number of shares (allowing devices to indicate their confidence in your or the Pico’s presence)
  • Categories of shares (allowing us to restrict the influence of a single class of devices)
  • Split shares (allowing combinations of devices to contribute more than simply the sum of their shares)
  • Negative shares (to allow some devices to indicate situations of vulnerability)

Extending the tree

Let me finish with one last question. To what extent are Picos and Picosiblings fundamentally different? Picos gain confidence to submit your credentials to a service based on the reassurance they get from Picosiblings. But we’ve also discussed the idea of Picosiblings getting confidence to submit their shares to a Pico based on other factors in the environment. Perhaps we could extend this hierarchy in the other direction, too.

A company might need k-of-n shares from the major investors’ Picos to appoint their representative director to the board. The company Pico might then need k-of-n of the directors’ Picos to authorise a major decision or transaction. And a commercial building might need k-of-n of the companies’ boards to agree to the resurfacing of the parking lot. This could be extended to political, national, even global decisions.

Perhaps I’m pushing the simple secret-sharing concept too far. But at the very least it’s an interesting thought experiment to consider how much we can represent real-world situations with this one idea.

After all, we need a simple, reliable and secure way to make these decisions in our own back yard, before we start interacting with the rest of the universe.

Passwords’14 in Trondheim

Passwords^14 is part of a lively and entertaining conference series started by password hacker Per Thorsheim in 2010 and devoted solely to passwords.

Up to now it has been mostly invited talks and hacks, but this year we are also having scientific papers, which will be peer-reviewed. Proceedings will be published in Springer LNCS. The submission deadline is 27 October 2014.

Call for papers: http://passwords14.com/cfp.pdf

Conference site: https://passwordscon.org/

See you in Trondheim, Norway, 8-10 December 2014.

Design and Implementation of the FreeBSD Operating System, Second Edition now shipping

Kirk McKusick, George Neville-Neil, and I are pleased to announce that The Design and Implementation of the FreeBSD Operating System, Second Edition is now available from Pearson Education (Amazon link for non-US folk). Light Blue Touchpaper readers might be particularly interested in the new chapter on FreeBSD’s kernel security features including:

  • Process Credentials
  • Users and Groups
  • Privilege Model
  • Interprocess Access Control
  • Discretionary Access Control
  • Capsicum Capability Model
  • Jails
  • Mandatory Access-Control Framework
  • Security Event Auditing
  • Cryptographic Services
  • GELI Full-Disk Encryption

There is detailed coverage of the FreeBSD TCB, POSIX.1e and NFSv4 ACLs, OS sandboxing features, the Mandatory Access Control Framework used not just in FreeBSD but also in Junos/Mac OS X/iOS, the FreeBSD kernel’s Yarrow-based pseudo-random number generator, both confidentiality and integrity cryptographic protection for filesystems, and the kernel’s IPsec implementation. Other new content in this edition of the book includes ZFS, paravirtualised device drivers, DTrace, NFSv4, network-stack virtualisation, and much more.

We will be using this book as one of the core texts for our new masters-level operating-system course at Cambridge, L41, in spring 2015.