Today we release a paper on security protocols and evidence which analyses why dispute resolution mechanisms in electronic systems often don’t work very well. On this blog we’ve noted many problems with EMV (Chip and PIN), as well as with other systems from curfew tags to digital tachographs. Time and again we find that electronic systems are truly awful for courts to deal with. Why?
The main reason, we observed, is that their dispute resolution aspects were never properly designed, built and tested. The firms that delivered the main production systems assumed, or hoped, that because some audit data were available, lawyers would be able to use them somehow.
As you’d expect, all sorts of things go wrong. We derive some principles, and show how these are also violated by new systems ranging from phone banking through overlay payments to Bitcoin. We also propose some enhancements to the EMV protocol which would make it easier to resolve disputes over Chip and PIN transactions.
Update (2014-03-07): This post was mentioned on Bruce Schneier’s blog, and there is some good discussion there.
Update (2014-03-03): The slides for the presentation at Financial Cryptography are now online.
February 5th, 2014 at 07:01 UTC
If you listen to Radio 4 from 0810 on BBC iPlayer, you’ll hear a debate between Phil Booth of MedConfidential and Tim Kelsey of NHS England – the guy driving the latest NHS data grab.
Tim Kelsey made a number of misleading claims. He claimed for example that in 25 years there had never been a single case of patient confidentiality compromise because of the HES data kept centrally on all hospital treatments. This was untrue. A GP practice manager, Helen Wilkinson, was stigmatised as an alcoholic on HES because of a coding error. She had to get her MP to call a debate in Parliament to get this fixed (and even after the minister promised it had been fixed, it hadn’t been; that took months more pushing).
Second, when Tim pressed Phil for a single case where data had been compromised, Phil said “Gordon Brown”. Kelsey’s rebuttal was “That was criminal hacking.” Again, this was untrue; Gordon Brown’s information was accessed by Andrew Jamieson, a doctor in Dunfermline, who abused his authorised access to the system. He was not prosecuted because this was deemed not to be in the public interest. Yeah, right. And now Kelsey is going to give your GP records not just to almost everyone in the NHS, but also to university researchers (I have been offered access, though I’m not even a medic, and despite the fact that academics have lost millions of records in the past), to drug firms like GlaxoSmithKline, and even to Silicon Valley informatics companies such as 23andMe.
February 4th, 2014 at 09:04 UTC
Today Robert Brady and I publish a paper that solves an outstanding problem in physics. We explain the beautiful bouncing droplet experiments of Yves Couder, Emmanuel Fort and their colleagues.
For years now, people interested in the foundations of physics have been intrigued by the fact that droplets bouncing on a vibrating tray of fluid can behave in many ways like quantum-mechanical particles, exhibiting single-slit and double-slit diffraction, tunnelling, Anderson localisation and quantised orbits.
In our new paper, Robert Brady and I explain why. The wave field surrounding the droplet is, to a good approximation, Lorentz covariant with the constant c being the speed of surface waves. This plus the inverse square force between bouncing droplets (which acts like the Coulomb force) gives rise to an analogue of the magnetic force, which can be observed clearly in the droplet data. There is also an analogue of the Schrödinger equation, and even of the Pauli exclusion principle.
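To make the covariance claim concrete, here is a back-of-the-envelope sketch (the notation here is ours, not taken from the paper): small-amplitude surface waves satisfy a wave equation, and any such equation keeps its form under Lorentz transformations in which the wave speed plays the role of c.

```latex
% Small-amplitude surface waves obey the wave equation
\frac{\partial^2 h}{\partial t^2} = c^2 \, \nabla^2 h ,
% which is left unchanged by the Lorentz transformation
x' = \gamma \, (x - v t), \qquad
t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} ,
% where c is the speed of the surface waves, not the speed of light.
```

So quantities built from the wave field transform between frames just as they do in special relativity, but with the surface-wave speed as the limiting velocity.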
These results not only solve a fascinating puzzle, but might perhaps nudge more people to think about novel models of quantum foundations, about which we’ve written three previous papers.
January 20th, 2014 at 07:01 UTC
The Privacy Enhancing Technologies Symposium (PETS) aims to advance the state of the art and foster a world-wide community of researchers and practitioners to discuss innovation and new perspectives.
PETS seeks paper submissions for its 14th event (of which I am program chair), to be held in Amsterdam, Netherlands, on July 16–18, 2014. Papers should present novel practical and/or theoretical research into the design, analysis, experimentation, or fielding of privacy-enhancing technologies. While PETS has traditionally been home to research on anonymity systems and privacy-oriented cryptography, we strongly encourage submissions on a range of both well-established and emerging privacy-related topics.
Abstracts should be submitted by 10 February 2014, with full papers submitted by 13 February 2014. For further details, see the call for papers.
January 17th, 2014 at 15:59 UTC
The next three weeks will see a leaflet drop on over 20 million households. NHS England plans to start uploading your GP records in March or April to a central system, from which they will be sold to a wide range of medical and other research organisations. European data-protection and human-rights laws demand that we be able to opt out of such things, so the Information Commissioner has told the NHS to inform you of your right to opt out.
Needless to say, their official leaflet is designed to cause as few people to opt out as possible. It should really have been drafted like this. (There’s a copy of the official leaflet at the MedConfidential.org website.) But even if it had been, the process still won’t meet the consent requirements of human-rights law as it won’t be sent to every patient. One of your housemates could throw it away as junk before you see it, and if you’ve opted out of junk mail you won’t get a leaflet at all.
Yet if you don’t opt out in the next few weeks, your data will be uploaded to central systems and you will never be able to get it deleted. If you don’t opt your kids out in the next few weeks, the same will happen to their data, and they will not be able to get it deleted even if they decide they prefer privacy once they come of age. If you opted out of the Summary Care Record in 2009, that doesn’t count; despite a ministerial assurance to the contrary, you now need to opt out all over again. For further information, see the website of GP Neil Bhatia (who drafted our more truthful leaflet) and previous LBT posts on medical privacy.
January 8th, 2014 at 22:23 UTC
When I read about cryptography from before the computer era, I sometimes wonder why people settled for the systems they did instead of something a bit more secure. We may ridicule portable encryption systems based on monoalphabetic or even simple polyalphabetic ciphers, but we may also change our opinion after actually trying one for real.
January 6th, 2014 at 12:32 UTC
David Modic and I have just published a paper on The psychology of malware warnings. We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?
To our surprise, social cues didn’t seem to work. What works best is to make the warning concrete; people ignore general warnings such as that a web page “might harm your computer” but do pay attention to a specific one such as that the page would “try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”. There is also some effect from appeals to authority: people who trust their browser vendor will avoid a page “reported and confirmed by our security team to contain malware”.
We also analysed who turned off browser warnings, or would have if they’d known how: they were people who ignored warnings anyway, typically men who distrusted authority and either couldn’t understand the warnings or were IT experts.
January 3rd, 2014 at 19:20 UTC
Passwords have not really changed since they were first used. Let’s go down memory lane a bit, and then analyse how password systems work and how they could be improved. You may say: forget passwords, one-time passwords (OTPs) are the way forward. My next question would then be: so why do we use OTPs in combination with passwords, if they are so good?
January 2nd, 2014 at 12:33 UTC
We had a crypto festival in London in November at which a number of cryptographers and crypto policy folks got together with over 1000 mostly young attendees to talk about what might be done in response to the Snowden revelations.
Here is a video of the session in which I spoke. The first speaker was Annie Machon (at 02.35) talking of her experience of life on the run from MI5, and on what we might do to protect journalists’ sources in the future. I’m at 23.55 talking about what’s changed for governments, corporates, researchers and others. Nick Pickles of Big Brother Watch follows at 45.45 talking on what can be done in terms of practical politics; it turned out that only two of us in the auditorium had met our MPs over the Comms Data Bill. The final speaker, Smari McCarthy, comes on at 56.45, calling for lots more encryption. The audience discussion starts at 1:12:00.
December 31st, 2013 at 18:24 UTC
It’s been a busy year for Capsicum, practical capabilities for UNIX, so a year-end update seemed in order:
The FreeBSD Foundation and Google jointly funded a Capsicum Integration Project that took place throughout 2013 — described by Foundation project technical director Ed Maste in a recent blog article. Pawel Jakub Dawidek refined several Capsicum APIs, improving support for ioctls and increasing the number of supported capability rights for FreeBSD 10. He also developed Casper, a helper daemon that provides services (such as DNS, access to random numbers) to sandboxes — and can, itself, sandbox services. Casper is now in the FreeBSD 11.x development branch, enabled by default, and should appear in FreeBSD 10.1. The Google Open Source Program Office (OSPO) blog also carried a September 2013 article on their support for open-source security, featuring Capsicum.
Capsicum is enabled by default in the forthcoming FreeBSD 10.0 release — capability mode, capabilities, and process descriptors are available in the out-of-the-box GENERIC kernel. A number of system services use Capsicum to sandbox themselves — such as the DHCP client, the high-availability storage daemon, and the audit log distribution daemon — as well as command-line tools like kdump and tcpdump that handle risky data. Even more will appear in FreeBSD 10.1 next year, now that Casper is available.
David Drysdale at Google announced Capsicum for Linux, an adaptation of Linux to provide Capsicum’s capability mode and capabilities, in November 2013. David and Ben Laurie visited us in Cambridge multiple times this year to discuss the design and implementation, review newer Capsicum APIs, and talk about future directions. They hope to upstream this work to the Linux community. Joris Giovannangeli also announced an adaptation of Capsicum to DragonFlyBSD in October 2013.
Over the summer, Mariusz Zaborski and Daniel Peryolon were funded by Google Summer of Code to work on a variety of new Capsicum features and services, adapting core UNIX components and third-party applications to support sandboxing. For example, Mariusz looked at sandboxing BSD grep: if a vulnerability arises in grep’s regular-expression matching, why should processing a file of malicious origin yield full rights to your UNIX account?
In May 2013, our colleagues at the University of Wisconsin, Madison, led by Bill Harris, published a paper at the IEEE Symposium on Security and Privacy (“Oakland”) on “Declarative, Temporal, and Practical Programming with Capabilities” — how to model program behaviour, and automatically transform some classes of applications to use Capsicum sandboxing. We were very pleased to lend a hand with this work, and feel the art of programming for compartmentalisation is a key research challenge. We also collaborated with folk at SRI and Google on a workshop paper developing our ideas about application compartmentalisation, which appeared at the Security Protocols Workshop here in Cambridge in March 2013.
Google and the FreeBSD Foundation are committed to further work on Capsicum and its integration with applications, and research continues at several institutions, including here at Cambridge, on how to apply Capsicum. We hope to kick off a new batch of application adaptation in coming months, as well as integration with features such as DNSSEC. We also need your help in adapting applications to use Capsicum on systems that support it!
December 20th, 2013 at 23:02 UTC