Category Archives: Security economics

Social-science angles of security

Video on Edge

John Brockman of Edge interviewed me in London in March. The video of the interview, and a transcript, are now available on the Edge website. Edge runs big interviews with several dozen scientists a year, with particular interest in people who do cross-disciplinary work. For me, the interaction of economics, psychology and engineering is one of the things that makes security so fascinating, as well as the creativity driven by adversarial behaviour.

The topics covered include the last thirty years of progress (or lack of it) in information security, from the early beginnings, through the crypto wars and crime moving online, to the economics of security. We talked about how cryptography can help less developed countries; about managing complexity in big projects; about how network effects lead firms to design insecure products; about whether big data can undermine democracy by empowering elites; and about how in a future world of intelligent things, security may become more about safety than anything else. Finally I talk about our current big project, the Cambridge Cybercrime Centre.

John runs a literary agency, and he’s worked on books by many of the scientists who feature on his site. This makes me wonder: on what topic should I write my next book?

Banks biased against black fraud victims

The following is an op-ed I wrote in today’s Times. It appeared in their Thunderer column.

You’re less likely to be treated fairly by your bank if you’re elderly, poor, female or black. We’ve suspected this for years, and finally The Times has dug up the numbers to prove it.

Fraud victims who’re refused compensation often contact our security research group at Cambridge after they find we work on payment fraud. We call this stream of complaints our ‘fraud telescope’ as it gives us early warning of what the bad guys are up to. We’ve had more than 2,000 cases over 25 years.

In recent years we’ve started to realise what we weren’t seeing. The “dark matter” in the fraud universe is the missing victims: we don’t see that many middle-class white men. The victims who do come to us are disproportionately elderly, poor, female, or black. But crime surveys tell us that the middle classes and the young are more likely to be victims of fraud, so it’s hard to avoid the conclusion that banks are less generous to some of their customers.

We raised the issue of discrimination in 2011 with one of the banks and with the Commission for Racial Equality, but as no-one was keeping records, nothing could be proved, until today.

How can this discrimination happen? Well, UK rules give banks a lot of discretion to decide whether to refund a victim, and the first responders often don’t know the full story. If your HSBC card was compromised by a skimmer on a Tesco ATM, there’s no guarantee that Tesco will have told anyone (unlike in America, where the law forces Tesco to tell you). And the fraud pattern might be something entirely new. So bank staff end up making judgement calls like “Is this customer telling the truth?” and “How much is their business worth to us?” This in turn sets the stage for biases and prejudices to kick in, however subconsciously. Add management pressure to cut costs, sometimes even bonuses for cutting them, and here we are.

There are two lessons. First, banks need to train staff to be aware of unconscious bias (as universities do), and monitor their performance.

Second, the Financial Conduct Authority needs to protect all customers properly. It seems to be moving in the right direction; after the recent fraud against tens of thousands of Tesco Bank account holders, it said it expected fraud victims to be made good immediately. This has been the law in the USA since the 1980s and it must become a firm rule here too.

Security Economics MOOC

In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.

Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.
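The Alice-and-Bob point can be made concrete with a toy expected-cost model (entirely illustrative, not from the course): Alice chooses how much to spend on defence, but some share of the breach loss falls on Bob. When Alice bears none of the loss, her rational spend is zero.

```python
# Toy model of misaligned incentives: Alice picks a defence spend,
# but may bear only part of the loss when a breach happens.

def breach_probability(defence_spend: float) -> float:
    """Illustrative assumption: more spending lowers breach probability."""
    return 1.0 / (1.0 + defence_spend)

def alice_cost(defence_spend: float, share_of_loss: float,
               loss: float = 100.0) -> float:
    """Alice pays for the defence plus her share of the expected loss."""
    return defence_spend + share_of_loss * breach_probability(defence_spend) * loss

def best_spend(share_of_loss: float) -> float:
    """Brute-force Alice's cost-minimising defence spend over a grid."""
    candidates = [s / 10 for s in range(0, 1000)]
    return min(candidates, key=lambda s: alice_cost(s, share_of_loss))

print(best_spend(0.0))  # 0.0 — Alice bears no loss, so she spends nothing
print(best_spend(1.0))  # 9.0 — bearing the full loss, she invests in defence
```

The function shapes and numbers are arbitrary; the point is only that Alice’s privately optimal spend collapses as her share of the loss falls, which is exactly the incentive failure the course studies.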

We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.

So here it is as a free course. Spread the word!

Royal Society report on cybersecurity research

The Royal Society has just published a report on cybersecurity research. I was a member of the steering group that tried to keep the policy team headed in the right direction. Its recommendation that governments preserve the robustness of encryption is welcome enough, given the new Russian law on access to crypto keys; it was nice to get, given the conservative nature of the Society. But I’m afraid the glass is only half full.

I was disappointed that the final report went along with the GCHQ line that security breaches should not be reported to affected data subjects, as in the USA, but to the agencies, as mandated in the EU’s NIS directive. Its call for an independent review of the UK’s cybersecurity needs may also achieve little. I was on John Beddington’s Blackett Review five years ago, and the outcome wasn’t published; it was mostly used to justify a budget increase for GCHQ. Its call for UK government work on standards is irrelevant post-Brexit; indeed standards made in Europe will probably be better without UK interference. Most of all, I cannot accept the report’s line that the government should help direct cybersecurity research. Most scientists agree that too much money already goes into directed programmes and not enough into responsive-mode and curiosity-driven research. In the case of security research there is a further factor: the stark conflict of interest between bona fide researchers, whose aim is that some of the people should enjoy some security and privacy some of the time, and agencies engaged in programmes such as Operation Bullrun whose goal is that this should not happen. GCHQ may want a “more responsive cybersecurity agenda”; but that’s the last thing people like me want them to have.

The report has in any case been overtaken by events. First, Brexit is already doing serious harm to research funding. Second, Brexit is also doing serious harm to the IT industry; we hear daily of listings postponed, investments reconsidered and firms planning to move development teams and data overseas. Third, the Investigatory Powers bill currently before the House of Lords highlights that the surveillance debate in the West these days is more about access to data at rest, and about whether the government can order firms to hack their customers.

While all three arms of the US government have drawn back on surveillance powers following the Snowden revelations, Theresa May has taken the hardest possible line. Her Investigatory Powers Bill will give her successors as Home Secretary sweeping powers to order firms in the UK to hand over data and help GCHQ hack their customers. Brexit will shield these powers from challenge in the European Court of Justice, making it much harder for a UK company to claim “adequacy” for its data protection arrangements in respect of EU data subjects. This will make it still less attractive for an IT company to keep in the UK either data that could be seized or engineering staff who could be coerced. I am seriously concerned that, together with Brexit, this will be the double whammy that persuades overseas firms not to invest in the UK, and that even causes some UK firms to leave. In the face of this massive self-harm, the measures suggested by the report are unlikely to help much.

Inaugural Cybercrime Conference

The Cambridge Cloud Cybercrime Centre is organising an inaugural one-day conference on cybercrime on Thursday, 14th July 2016.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but for this first year we have a stellar group of invited speakers who are at the forefront of their fields:

  • Adam Bossler, Associate Professor, Department of Criminal Justice and Criminology, Georgia Southern University, USA
  • Alice Hutchings, Post-doc Criminologist, Computer Laboratory, University of Cambridge, UK
  • David S. Wall, Professor of Criminology, University of Leeds, UK
  • Maciej Korczynski, Post-Doctoral Researcher, Delft University of Technology, The Netherlands
  • Michael Levi, Professor of Criminology, Cardiff University, UK
  • Mike Hulett, Head of Operations, National Cyber Crime Unit, National Crime Agency, UK
  • Nicolas Christin, Assistant Research Professor of Electrical and Computer Engineering, Carnegie Mellon University, USA
  • Richard Clayton, Director, Cambridge Cloud Cybercrime Centre, University of Cambridge, UK
  • Ross Anderson, Professor of Security Engineering, Computer Laboratory, University of Cambridge, UK
  • Tyler Moore, Tandy Assistant Professor of Cyber Security & Information Assurance, University of Tulsa, USA

They will present various aspects of cybercrime from the point of view of criminology, security economics, cybersecurity governance and policing.

This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Ninth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 12th and 13th July 2016.

For more details see here.

Cambridge and Brexit

If the UK leaves the European Union, it will cost Cambridge University about £100m, or about 10% of our turnover.

I present the details in an article today in the Cambridge News.

I reckon we will lose at least £60m of the £69m we get in European grants, at least £20m of our £237m fee income (most of which is from foreign students), at least £10m from Cambridge Assessment and Cambridge University Press, and £5m each from industry and charities. Although I’m an elected member of Council (the governing body) and the committee that sets the budget, all this comes from our published accounts.
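As a sanity check, the itemised estimates above do sum to the headline figure (all amounts in £m, taken straight from the post):

```python
# Itemised estimated losses from the post, in £m.
losses = {
    "European grants": 60,
    "fee income": 20,
    "Cambridge Assessment / CUP": 10,
    "industry": 5,
    "charities": 5,
}

total = sum(losses.values())
print(total)  # 100 — matching the headline figure of about £100m
```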

And my estimates are conservative; the outcome could easily be worse, especially if foreign students desert us, or just can’t get visas after a popular vote against immigration.

Now everyone in Britain pays on average £4 a week to the EU and gets £2 back. The net contribution of £2 a week amounts to £12.5m a year for a town the size of Cambridge. The University alone is getting more than four times that back directly, and yet more indirectly. And the same goes for many other university towns too; even Newcastle gets more than would be raised by everyone in the city paying £2 a week.
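A back-of-the-envelope check of the £12.5m figure, assuming the £2 net contribution is per person per week and a Cambridge population of roughly 120,000 (both working assumptions of mine):

```python
# Rough arithmetic behind the £12.5m figure for Cambridge.
population = 120_000      # assumed Cambridge population
net_per_week = 2          # assumed net EU contribution, £ per person per week
weeks = 52

cambridge_net = population * net_per_week * weeks
print(f"£{cambridge_net:,} a year")  # £12,480,000 — roughly the £12.5m cited
```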

But this is not just about money; it’s about who we are, and also about what other people perceive us to be. If Britain votes to leave Europe following a xenophobic campaign against immigrants, people overseas may conclude that Britain is no longer a cool place to study, or to start a research lab. Even some of the people already here will leave. We will do the best we can to keep the flame alight, but it will be very much harder for Cambridge to remain a world-leading university.

See also the Cambridge News editorial, and my piece yesterday on Brexit and tech.