Category Archives: News coverage

Media reports that may interest you

Symposium on Post-Bitcoin Cryptocurrencies

I am at the Symposium on Post-Bitcoin Cryptocurrencies in Vienna and will try to liveblog the talks in follow-ups to this post.

The introduction was by Bernhard Haslhofer of AIT, who maintains the graphsense.info toolkit and runs the Titanium project on bitcoin forensics jointly with Rainer Boehme of Innsbruck. Rainer then presented an economic analysis arguing that criminal transactions were pretty well the only logical app for bitcoin as it’s permissionless and trustless; if you have access to the courts then there are better ways of doing things. However in the post-bitcoin world of ICOs and smart contracts, it’s not just the anti-money-laundering agencies who need to understand cryptocurrency but the securities regulators and the tax collectors. Yet there is a real policy tension. Governments hype blockchains; Austria uses them to auction sovereign bonds. Yet the only way in for the citizen is through the swamp. How can the swamp be drained?

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas via an invited talk at Usenix Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?
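
To see how little tooling such aggregation needs, here is a minimal sketch (my illustration, assuming Pillow is installed; the folder name is hypothetical) that pulls GPS coordinates out of the EXIF headers of a directory of geotagged photos. Each photo leaks one point; a few thousand of them map an animal's range well enough to set traps.

from pathlib import Path
from PIL import Image

GPSINFO_TAG = 34853  # standard EXIF tag number for the GPS IFD

def to_degrees(dms, ref):
    # Convert EXIF (degrees, minutes, seconds) rationals to signed decimal degrees.
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def gps_from_photo(path):
    gps = Image.open(path).getexif().get_ifd(GPSINFO_TAG)
    if not gps:
        return None
    lat = to_degrees(gps[2], gps[1])   # GPSLatitude, GPSLatitudeRef
    lon = to_degrees(gps[4], gps[3])   # GPSLongitude, GPSLongitudeRef
    return lat, lon

# One geotagged photo is one data point; a corpus of them is a map.
points = [p for p in (gps_from_photo(f) for f in Path("photos").glob("*.jpg")) if p]
print(f"recovered {len(points)} locations from geotagged photos")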

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservationists and online service firms.

BBC Horizon documentary: A Week without lying, the honesty experiment

Together with Ronald Poppe, Paul Taylor, and Gordon Wright, Sophie van der Zee (previously employed at the Cambridge Computer Laboratory) took the plunge and tested their automated lie detection methods in the real world. How well do the lie detection methods that we develop and test under very controlled circumstances in the lab perform in the real world? And what happens to you and your social environment when you constantly feel monitored and attempt to live a truthful life? Is living a truthful life actually something we should desire?

The two-time pad: midwife of information theory?

The NSA has declassified a fascinating account by John Tiltman, one of Britain’s top cryptanalysts during the Second World War, of the work he did against Russian ciphers in the 1920s and 30s.

In it, he reveals (first para, page 8) that from the time the Russians first introduced one-time pads in 1928, they actually allowed these pads to be used twice.

This was still a vast improvement on the weak ciphers and code books the Russians had used previously. Tiltman notes ruefully that “We were hardly able to read anything at all except in the case of one or two very stereotyped proforma messages”.

After Gilbert Vernam developed encryption using XOR with a key tape, Joseph Mauborgne suggested using the key only once for security, and this may have seemed natural in the context of a cable company. When the Russians developed their manual system (which may have been inspired by the U.S. work, or by a German one-time pad developed earlier in the 1920s), they presumably reckoned that using the pads twice was safe enough.

They were spectacularly wrong. The USA started Operation Venona in 1943 to decrypt messages where one-time pads had been reused, and this later became one of the first applications of computers to cryptanalysis, leading to the exposure of spies such as Blunt and Cairncross. The late Bob Morris, chief scientist at the NSA, used to warn us enigmatically of “The Two-time pad”. The story up till now was that the Russians must have reused pads under pressure of war, when it became difficult to get couriers through to embassies. Now it seems to have been Russian policy all along.

Many people have wondered what classified war work might have inspired Claude Shannon to write his stunning papers at the end of WW2 in which he established the mathematical basis of cryptography, and of information more generally.

Good research usually comes from real problems. And here was a real problem, which demanded careful clarification of two questions. Exactly why was the one-time pad good and the two-time pad bad? And how can you measure the actual amount of information in an English (or Russian) plaintext telegram: is it more or less than half the amount of information you might squeeze into that many bits? These questions are very much sharper for the two-time pad than for rotor machines or the older field ciphers.
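
To make the contrast concrete, here is a minimal Python sketch (my illustration, with made-up messages) of why pad reuse is fatal: XORing two ciphertexts encrypted under the same pad cancels the key and leaves the XOR of the two plaintexts, the foothold that crib-dragging, and ultimately Venona, exploited.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two hypothetical plaintexts of equal length and one shared pad.
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT AT TEN"
pad = os.urandom(len(p1))   # used correctly, this pad would encrypt one message only

c1 = xor(p1, pad)
c2 = xor(p2, pad)

# Reuse the pad and the key cancels out:
#   c1 XOR c2 = (p1 XOR pad) XOR (p2 XOR pad) = p1 XOR p2
# so the attacker learns the XOR of the plaintexts without touching the key.
assert xor(c1, c2) == xor(p1, p2)
print(xor(c1, c2).hex())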

That at least was what suddenly struck me on reading Tiltman. Of course this is supposition; but perhaps there are interesting documents about Shannon’s war work to be flushed out with freedom of information requests. (Hat tip: thanks to Dave Banisar for pointing us at the Tiltman paper.)

Responsible vulnerability disclosure in Europe

There is a report out today from the European economics think-tank CEPS on how responsible vulnerability disclosure might be harmonised across Europe. I was one of the advisers to this effort, which involved not just academics and NGOs but also industry.

It was inspired in part by earlier work reported here on standardisation and certification in the Internet of Things. What happens to car safety standards when cars get patched once a month, like phones and laptops? The answer is not just that safety becomes a moving target rather than a matter of pre-market testing; we also need a regime whereby accidents, hazards, vulnerabilities and security breaches get reported. That will mean responsible disclosure not just to OEMs and component vendors, but also to safety regulators, standards bodies, traffic police, insurers and accident victims. If we get it right, we could have a learning system that becomes steadily safer and more secure. But we could also get it badly wrong.

Getting it right will involve significant organisational and legal changes, which we discussed in our earlier report and which we carry forward here. We didn’t get everything we wanted; for example, large software vendors wouldn’t support our recommendation to extend the EU Product Liability Directive to services. Nonetheless, we made some progress, so today’s report can be seen as a second step on the road.

Failure to protect: kids’ data in school

If you care about children’s rights, data protection or indeed about privacy in general, then I’d suggest you read this disturbing new report on what’s happening in Britain’s schools.

In an ideal world, schools would be actively preparing pupils to be empowered citizens in a digital world that is increasingly riddled with exploitative and coercive systems. Instead, the government is forcing schools to collect data that are then sold or given to firms that exploit them, with no meaningful consent. There is not even the normal right to request subject access, so you can check whether the information about you is right and have it corrected if it’s wrong.

Yet the government has happily given the Daily Telegraph fully-identified pupil information so that it can do research, presumably on how private schools are better than government ones, or how grammar schools are better than comprehensives. You just could not make this up.

The detective work to uncover such abuses has been done by the NGO Defenddigitalme, who followed up work we did more than a decade ago on the National Pupil Database in our Database State report and our earlier research on children’s databases. Defenddigitalme are campaigning for subject access rights, the deletion of nationality data, and a code of practice. Do read the report, and if you think it’s outrageous, write to your MP and say so. Our elected representatives make a lot of noise about protecting children; it’s time to call them on it.

Happy Birthday FIPR!

On May 29th there will be a lively debate in Cambridge between people from NGOs and GCHQ, academia and Deepmind, the press and the Cabinet Office. Should governments be able to break the encryption on our phones? Are we entitled to any privacy for our health and social care records? And what can be done about fake news? If the Internet’s going to be censored, who do we trust to do it?

The occasion is the 20th birthday of the Foundation for Information Policy Research, which was launched on May 29th 1998 to campaign against what became the Regulation of Investigatory Powers Act. Tony Blair wanted to be able to treat all URLs as traffic data and collect everyone’s browsing history without a warrant; we fought back, and our “big browser” amendment defined traffic data to be only that part of the URL needed to identify the server. That set the boundary. Since then, FIPR has engaged in research and lobbying on export control, censorship, health privacy, electronic voting and much else.
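
As a concrete illustration of where that boundary falls (my example, not the amendment’s wording): the hostname identifies the server and counts as traffic data, while the path and query string reveal what you were reading and so count as content.

from urllib.parse import urlsplit

url = "https://example.com/medical/search?q=chest+pain"   # hypothetical URL
parts = urlsplit(url)

traffic_data = parts.netloc                                # "example.com"
content = parts.path + ("?" + parts.query if parts.query else "")

print("traffic data (server):", traffic_data)
print("content (path and query):", content)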

After twenty years it’s time to take stock. It’s remarkable how little the debate has shifted despite everything moving online. The police and spooks still claim they need to break encryption but still can’t support that with real evidence. Health administrators still want to sell our medical records to drug companies without our consent. Governments still can’t get it together to police cybercrime, but want to censor the Internet for all sorts of other reasons. Laws around what can be said or sold online – around copyright, pornography and even election campaign funding – are still tussle spaces, only now the big beasts are Google and Facebook rather than the copyright lobby.

A historical perspective may be of some value in guiding future policy debates. If you’d like to join in the discussion, book your free ticket here.

Don’t blame Cambridge for Facebook’s privacy crisis

Mark Zuckerberg tried to blame Cambridge University in his recent testimony before the US Senate, saying “We do need to understand whether there was something bad going on in Cambridge University overall, that will require a stronger action from us.”

The New Scientist invited me to write a rebuttal piece, and here it is.

Dr Kogan tried to get approval to use the data his company had collected from Facebook users in academic research. The psychology ethics committee refused permission, and when he appealed to the University Ethics Committee (declaration: I’m a member) this refusal was upheld. Although he had consent from the people who installed his app, the same could not be said of their Facebook “friends”, from whom most of the data were collected.

The deceptive behaviour here has been by Facebook, which creates the illusion of privacy in order to get its users to share more data. There has been a lot of work on the economics and psychology of privacy over the past decade and we now understand the dynamics of advertising markets better than we used to.

One big question is the “privacy paradox”: why do people say they care about privacy, yet behave otherwise? Part of the answer is about context, and part is about learning. Over time, more and more people are starting to pay attention to online privacy settings, even though Facebook and other online advertising firms keep changing those settings to confuse people.

With luck, the Facebook scandal will be a “flashbulb moment” that will drive lots more people to start caring about their privacy online. It will certainly provide interesting new data to privacy researchers.

Making security sustainable

Making security sustainable is a piece I wrote for Communications of the ACM; it has just appeared in the Privacy and Security column of the March issue. Now that software is appearing in durable goods, such as cars and medical devices, that can kill us, software engineering will have to come of age.

The notion that software engineers are not responsible for things that go wrong will be laid to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve. And as security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection.

Perhaps the biggest challenge will be durability. At present we have a hard time patching a phone that’s three years old. Yet the average age of a UK car at scrappage is about 14 years, and rising all the time; cars used to last 100,000 miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal to that of the fuel it will burn over its lifetime, we just can’t afford to scrap cars after five years, as we do laptops.

For durable safety-critical goods that incorporate software, the long-term software maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost.

This paper follows on from our earlier work for the European Commission on what happens to safety regulation in the future Internet of Things.

Is this research ethical?

The Economist features face recognition on its front page, reporting that deep neural networks can now tell, just by looking at your face, whether you’re straight or gay, and do so better than humans can. The research they cite is a preprint, available here.

Its authors, Kosinski and Wang, downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias: if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.
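
For concreteness, this is the general shape of the pipeline described above, sketched with placeholder data; the real study used face descriptors from a pretrained network, and nothing below is the authors’ code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))    # stand-in for per-photo face descriptors
labels = rng.integers(0, 2, size=1000)     # stand-in for the profile-derived label

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())

# Note that nothing in this pipeline controls for how the photos were chosen,
# which is exactly the selection-bias objection raised above.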

Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.

Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken in a public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives, even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research on any nature-versus-nurture question, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.

What do LBT readers think?