Category Archives: Social networks

How Protocols Evolve

Over the last thirty years or so, we’ve seen security protocols evolving in different ways, at different speeds, and at different levels in the stack. Today’s TLS is much more complex than the early SSL of the mid-1990s; the EMV card-payment protocols we now use at ATMs are much more complex than the ISO 8583 protocols used in the eighties when ATM networking was being developed; and there are similar stories for GSM/3G/4G, SSH and much else.

How do we make sense of all this?

Reconciling Multiple Objectives – Politics or Markets? was particularly inspired by Jan Groenewegen’s model of innovation, according to which the rate of change depends on the granularity of change. Can a new protocol be adopted by individuals, or does it need companies to adopt it en masse for internal use, or does it need to spread through a whole ecosystem, or – the hardest case of all – does it require a change in culture, norms or values?

Security engineers tend to neglect such “soft” aspects of engineering, and we probably shouldn’t. So we sketch a model of the innovation stack for security and draw a few lessons.

Perhaps the most overlooked need in security engineering, particularly in the early stages of a system’s evolution, is recourse. Just as early ATM and point-of-sale system operators often turned away fraud victims, claiming “Our systems are secure so it must have been your fault”, so nowadays people who suffer abuse on social media can find that there’s nowhere to turn. A prudent engineer should anticipate disputes, and give some thought in advance to how they should be resolved.

Reconciling Multiple Objectives appeared at Security Protocols 2017. I forgot to put the accepted version online and in the repository after the proceedings were published in late 2017. Sorry about that. Fortunately the REF rule that papers must be made open access within three months doesn’t apply to conference proceedings published as a book series; it may be of value to others to know this!

Don’t blame Cambridge for Facebook’s privacy crisis

Mark Zuckerberg tried to blame Cambridge University in his recent testimony before the US Senate, saying “We do need to understand whether there was something bad going on in Cambridge University overall, that will require a stronger action from us.”

The New Scientist invited me to write a rebuttal piece, and here it is.

Dr Kogan tried to get approval to use the data his company had collected from Facebook users in academic research. The psychology ethics committee refused permission, and when he appealed to the University Ethics Committee (declaration: I’m a member), this refusal was upheld. Although he’d got consent from the people who ran his app, the same could not be said of their Facebook “friends”, from whom most of the data were collected.

The deceptive behaviour here has been by Facebook, which creates the illusion of privacy in order to get its users to share more data. There has been a lot of work on the economics and psychology of privacy over the past decade, and we now understand the dynamics of advertising markets better than we used to.

One big question is the “privacy paradox”: why do people say they care about privacy, yet behave otherwise? Part of the answer is about context, and part is about learning. Over time, more and more people are starting to pay attention to online privacy settings, despite attempts by Facebook and other online advertising firms to keep changing them to confuse people.

With luck, the Facebook scandal will be a “flashbulb moment” that will drive lots more people to start caring about their privacy online. It will certainly provide interesting new data to privacy researchers.

Video on Edge

John Brockman of Edge interviewed me in London in March. The video of the interview, and a transcript, are now available on the Edge website. Edge runs big interviews with several dozen scientists a year, with particular interest in people who do cross-disciplinary work. For me, the interaction of economics, psychology and engineering is one of the things that makes security so fascinating, along with the creativity driven by adversarial behaviour.

The topics covered include the last thirty years of progress (or lack of it) in information security, from the early beginnings, through the crypto wars and crime moving online, to the economics of security. We talked about how cryptography can help less developed countries; about managing complexity in big projects; about how network effects lead firms to design insecure products; about whether big data can undermine democracy by empowering elites; and about how, in a future world of intelligent things, security may become more about safety than anything else. Finally, I talk about our current big project, the Cambridge Cybercrime Centre.

John runs a literary agency, and he’s worked on books by many of the scientists who feature on his site. This makes me wonder: on what topic should I write my next book?

Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall they’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Spooks behaving badly

Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.

Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months working for Google on sabbatical in 2011, working with the click fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam, which was to hire lawyers to write angry letters to Google. If they’d hired a geek who could have talked to the abuse teams constructively, they’d have achieved an awful lot more.

The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ already complained in the Telegraph that they still haven’t got all the murderers’ Facebook traffic; this is no doubt because the Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, then it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.

Security and Human Behaviour 2014

I’m liveblogging the Workshop on Security and Human Behaviour, which is being held here in Cambridge. The participants’ papers are here and the programme is here. For background, see the liveblogs for SHB 2008–13, which are linked here and here. Blog posts summarising the talks at the workshop sessions will appear as follow-ups below, and audio files will be here.