Today sees the publication of a report I helped to write for the Nuffield Council on Bioethics on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.
As the information we gave to our doctors in private to help them treat us is now collected and treated as an industrial raw material, there has been scandal after scandal. From failures of anonymisation through unethical sales to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?
We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult people who’ll be affected or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.
The collection, linking and use of data in biomedical research and health care: ethical issues took over a year to write. Our working group came from the medical profession, academics, insurers and drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons can be of value elsewhere – from consumer data to law enforcement and human rights.
I will be trying to liveblog Financial Cryptography 2015.
The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation, and his title was “What Satoshi didn’t know.” The main unknown six years ago when bitcoin launched was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality the bootstrap event was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then, they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third thing was how annoying people can be on the Internet; people will abuse your system for fun if it’s popular. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS-attack prevention or as a pseudonymous identity anchor.
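The anti-penny-flooding idea of favouring payers who turn over coins less frequently can be sketched with the old coin-age priority heuristic from early Bitcoin Core: weight each spent input by its value times its age, divided by transaction size. (A minimal sketch; the function and parameter names are mine, not from the talk.)

```python
def tx_priority(inputs, tx_size_bytes):
    """Coin-age priority: sum(input value in satoshis * confirmations) / size.

    Coins that sit still accumulate age, so a sybil shuttling the same
    coins back and forth all day scores near zero and can be deprioritised,
    while an ordinary payment spending an old coin sails through.
    inputs: list of (value_in_satoshis, confirmations) pairs.
    """
    return sum(value * age for value, age in inputs) / tx_size_bytes

# A 1 BTC input with 10 confirmations in a 250-byte transaction:
honest = tx_priority([(100_000_000, 10)], 250)
# The same coin recycled immediately (0 confirmations) scores zero:
flooder = tx_priority([(100_000_000, 0)], 250)
```

The point of dividing by size is that an attacker cannot raise priority simply by stuffing a transaction with padding; only genuinely old, valuable inputs help.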
It’s not even clear that Satoshi is or was a cryptographer; he used only ECC / ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit expected differences rather than transmitting all transactions over the network twice. Also, since 2009 we have FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there was a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard though; Gavin’s big task this year is to increase the block size. Getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
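The IBLT trick can be illustrated with a toy implementation (a sketch under my own assumptions about cell count, hash functions and key encoding; not the actual protocol). A node inserts the transaction IDs it knows and deletes the ones its peer reports; “peeling” the table then recovers just the small symmetric difference, so only a constant-size structure need cross the wire rather than every transaction twice.

```python
import hashlib

def h(x, seed):
    """Deterministic 64-bit hash of x under the given seed."""
    d = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(d[:8], "big")

class IBLT:
    """Toy invertible Bloom lookup table over positive integer keys
    (key 0 would be invisible to the XOR sums, so avoid it)."""

    def __init__(self, m=240, k=3):
        self.m, self.k = m, k
        self.count = [0] * m      # net insertions minus deletions per cell
        self.key_sum = [0] * m    # XOR of keys touching the cell
        self.check_sum = [0] * m  # XOR of key checksums, to detect pure cells

    def _cells(self, key):
        # Partitioned hashing: one cell per row, so a key never maps
        # to the same cell twice (which would cancel its own XORs).
        r = self.m // self.k
        return [i * r + h(key, i) % r for i in range(self.k)]

    def _update(self, key, delta):
        chk = h(key, "chk")
        for c in self._cells(key):
            self.count[c] += delta
            self.key_sum[c] ^= key
            self.check_sum[c] ^= chk

    def insert(self, key):
        self._update(key, 1)

    def delete(self, key):
        self._update(key, -1)

    def list_entries(self):
        """Peel 'pure' cells (net count of +-1 with a consistent checksum),
        removing each recovered key to expose further pure cells.
        Returns (inserted-but-not-deleted, deleted-but-not-inserted)."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if abs(self.count[c]) == 1 and \
                        self.check_sum[c] == h(self.key_sum[c], "chk"):
                    key = self.key_sum[c]
                    if self.count[c] == 1:
                        added.add(key)
                        self.delete(key)
                    else:
                        removed.add(key)
                        self.insert(key)
                    progress = True
        return added, removed
```

Decoding succeeds with high probability as long as the table has a few cells per element of the difference; crucially, the table’s size depends on the expected difference, not on the total number of transactions either side holds.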
I’ll try to blog the refereed talks in comments to this post.
TU Delft has just launched a massively open online course on security economics to which three current group members (Sophie van der Zee, David Modic and I) have contributed lectures, along with one alumnus (Tyler Moore). Michel van Eeten of Delft is running the course (Delft does MOOCs while Cambridge doesn’t yet), and there are also talks from Rainer Boehme. This was pre-announced here by Tyler in November.
The videos will be available for free in April; if you want to take the course now, I’m afraid it costs $250. The deal is that EdX paid for the production and will sell it as a professional course to security managers in industry and government; once that’s happened we’ll make it free to all. This is the same basic approach as with my book: rope in a commercial publisher to help produce first-class content that then becomes free to all. But if your employer is thinking of giving you some security education, you could do a lot worse than to support the project and enrol here.
Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.
Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months working for Google on sabbatical in 2011, working with the click fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam, which was to hire lawyers to write angry letters to Google. If they’d hired a geek who could have talked to the abuse teams constructively, they’d have achieved an awful lot more.
The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ already complained in the Telegraph that they still haven’t got all the murderers’ Facebook traffic; this is no doubt due to the fact that the Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, then it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.
The 2015 Workshop on the Economics of Information Security will be held at Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new, interdisciplinary, open-access journal published by Oxford University Press.
We hope to see lots of you in Delft!
As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.
The audio recording is available and some notes from the session are below.
The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.
Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.
I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.
In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).
Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).
While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will act as a natural experiment addressing the question of whether Chip and PIN or Chip and Signature is better, and from whose perspective.
The US is also distinctive in that the major tussle over payment card security is over the “interchange” fees paid by merchants to the banks which issue the cards used. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing different sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing results of this fight in the courts and through legislation.
EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.
I’m liveblogging WEIS 2014, as I did for WEIS 2013, 2012, 2011, 2010 and 2009. This is the thirteenth workshop on the economics of information security, and the sessions are being held today and tomorrow at Penn State. The panels and refereed paper sessions will be blogged in comments below this post.
Jim Graves, Alessandro Acquisti and I are giving a paper today at WEIS on Experimental Measurement of Attitudes Regarding Cybercrime, which we hope might nudge courts towards more rational sentencing for cybercrime.
At present, sentencing can seem somewhere between random and vindictive. People who commit a fraud online can get off with a tenth of what they’d get if they’d swindled the same amount of money face-to-face; yet people who indulge in political activism – as the Anonymous crowd did – can get hammered with much harsher sentences than they’d get for a comparable protest on the street.
Is this just the behaviour of courts and prosecutors, or does it reflect public attitudes?
We did a number of surveys of US residents and found convincing evidence that it’s the former. Americans want fraudsters to be punished on two criteria: for the value of the damage they do, with steadily tougher punishments for more damage, and for their motivation, where they want people who hack for profit to be punished more harshly than people who hack for political protest.
So Americans, thankfully, are rational. Let’s hope that legislators and prosecutors start listening to their voters.
Long-time readers will recall that last year ICANN published the draft report of our study into the abuse of privacy and proxy services when registering domain names.
At WEIS 2014 I will present our academic paper summarising what we have found — and the summary (as the slides for the talk indicate) is very straightforward:
- when criminals register domain names for use in online criminality they don’t provide their names and addresses;
- we collected substantial data to show that this is generally true;
- in doing so we found that the way in which contact details are hidden varies somewhat depending upon the criminal activity and this gives new insights;
- meantime, people calling for changes to domain ‘privacy’ and ‘proxy’ services “because they are used by criminals” must understand:
- the impact of such a policy change on other registrants
- the limitations of such a policy change on criminals
To give just one example, the registrants of the domain names used for fake pharmacies are the group that uses privacy and proxy services the most (55%): that’s because a key way in which such pharmacy domains are suppressed is to draw attention to invalid details having been provided when the domain was registered. Privacy and proxy services hide this fakery. In contrast, the registrants of domains that are used to supply child sexual images turn to privacy and proxy services just 29% of the time (only just higher than banks, at 28%)… but drawing attention to fallacious registration details is not the approach that is generally taken for this type of content.
Our work provides considerable amounts of hard data to inform the debates around changing the domain Whois system to significantly improve accuracy and usefulness and to prevent misuse. Abolishing privacy and proxy services, if this were even possible, would affect a substantial amount of lawful activity — while criminals currently using these services might be expected to adopt the methods of their peers and instead provide incomplete and inaccurate data. However, insisting that domain registration data was always complete and accurate would mean a great many lawful registrations would need to be updated.