The Cambridge Cybercrime Centre is organising another one day conference on cybercrime on Thursday, 12th July 2018.
We have a stellar group of invited speakers who are at the forefront of their fields:
- Dave Jevans, CipherTrace
- Gareth Tyson, Queen Mary University of London
- Marleen Weulen Kranenbarg, Vrije Universiteit Amsterdam
- Daniel R. Thomas, Cambridge Cybercrime Centre, University of Cambridge
- Giacomo Persi Paoli, RAND Europe
- David S. Wall, Centre for Criminal Justice Studies, University of Leeds
- J.J. Cardoso de Santanna, University of Twente
- Maria Bada, Global Cyber Security Capacity Centre, University of Oxford
- Sergio Pastrana, Cambridge Cybercrime Centre, University of Cambridge
- Andrew Caines, Faculty of Modern and Medieval Languages, University of Cambridge
- Richard Clayton, Cambridge Cybercrime Centre, University of Cambridge
They will present various aspects of cybercrime from the point of view of criminology, policy, security economics, law and industry.
This one day event, to be held in the Faculty of Law, University of Cambridge will follow immediately after (and will be in the same venue as) the “11th International Conference on Evidence Based Policing” organised by the Institute of Criminology which runs on the 10th and 11th July 2018.
Full details (and information about booking) are here.
I’m at the seventeenth workshop on the economics of information security, hosted by the University of Innsbruck. I’ll be liveblogging the sessions in followups to this post.
We have three open positions in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk.
We wish to fill at least one of the three posts with someone from a computer science, data science, or similar technical background.
BUT we’re not just looking for computer science people: to continue our multi-disciplinary approach, we wish to fill at least one of the three posts with someone from a criminology, sociology, psychology or legal background.
Details of the posts, and what we’re looking for, are in the job advert here: http://www.jobs.cam.ac.uk/job/17827/.
Bitcoin Redux explains what’s going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system: rather than giving their customers actual bitcoin, they display a “balance” and let them transact with others. If Alice sends Bob a bitcoin, and they’re both customers of the same exchange, the exchange just adjusts their balances rather than doing anything on the blockchain. This is an e-money service according to European law, but is the law enforced? Not where it matters. We’ve been looking at the details.
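To make the off-chain mechanism concrete, here is a minimal sketch of such an internal transfer (the names and balances are hypothetical, and this is not any real exchange’s code): only the exchange’s own ledger changes, and nothing is ever written to the blockchain.

```python
# Illustrative exchange ledger: BTC "balances" on the exchange's books.
balances = {"alice": 2.0, "bob": 0.5}

def internal_transfer(sender, recipient, amount):
    """Move `amount` between customer balances.

    No blockchain transaction occurs; only book entries change."""
    if balances[sender] < amount:
        raise ValueError("insufficient balance")
    balances[sender] -= amount
    balances[recipient] += amount

internal_transfer("alice", "bob", 1.0)
print(balances)  # {'alice': 1.0, 'bob': 1.5}
```

From the blockchain’s point of view, nothing has happened at all, which is precisely why such transfers look like e-money issuance rather than bitcoin custody.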
In March we wrote about how to trace stolen bitcoin, describing new tools that enable us to track crime proceeds on the blockchain with more precision than before. We waited for victims of bitcoin theft and fraud to come to us, so we could test our tools on real cases. However in most of them it was not clear that the victims had ever owned any bitcoin at all.
There are basically three ways you could try to hold a bitcoin. You could buy one from an exchange and get them to send it to a wallet you host yourself, but almost nobody does that.
You could buy one from an exchange and get the exchange to keep the keys for you, so that the asset was unique to you and they were only guarding it for you – just like when you buy gold and the bullion merchant then charges you a fee to guard your gold in his vault. If the merchant goes bust, you can turn up at the vault with your receipt and demand your gold back.
Or you could buy one from an exchange and have them owe you a bitcoin – just as when you put your money in the bank. The bank doesn’t have a stack of banknotes in the vault with your name on it; and if it goes bust you have to stand in line with the other creditors.
It seems that most people who buy bitcoin think that they’re operating under the gold merchant model, while most exchanges operate under the bank model. This raises a whole host of issues around solvency, liquidity, accounting practices, money laundering, risk and trust. The details matter, and the more we look at them, the worse it seems.
This paper will appear at the Workshop on the Economics of Information Security later this month. It contains eight recommendations for what governments should be doing to clean up this mess.
I’m at the 2018 Workshop on Security and Human Behavior which is being held this year at Carnegie Mellon University. For background, the workshop liveblogs and websites from 2008–17 are linked here.
As usual, I will try to liveblog the sessions in followups to this post.
We’re delighted to announce that the new security lectureship we advertised has been offered to Alice Hutchings, and she’s accepted. We had 52 applicants of whom we shortlisted three for interview.
Alice works in the Cambridge Cybercrime Centre and her background is in criminology. Her publications are here. Her appointment will build on our strengths in research on cybercrime, and will complement and extend our multidisciplinary work in the economics and psychology of security.
Over the years, I’ve had friends and acquaintances ask me about unauthorised transactions for flight bookings made with their credit cards. The question is usually along the lines of: if the airline knows which flight is being taken, why don’t the police go and meet the passenger?
This is a great question, but it’s often not quite so straightforward. Although Europol co-ordinates regular Global Airline Action Days, during which those travelling may be detained, this does not create disincentives for those actually obtaining the airline tickets.
A few years ago, Professor Nicolas Christin at Carnegie Mellon University mentioned to me that he was aware of cheap airline tickets being advertised on an online black market. This comment led to an in-depth research project, covering all corners of the globe, to understand how these tickets were being obtained, and why.
You can read more about my research here, including how some of these tickets are connected to other crime types, such as human smuggling and trafficking; theft (including pickpocketing and shoplifting from airport stores); smuggling cash and contraband, such as drugs, cigarettes and tobacco; facilitating money laundering (such as opening bank accounts in other countries); and credit card fraud, including making transactions with compromised cards, and operating skimmers.
On May 29th there will be a lively debate in Cambridge between people from NGOs and GCHQ, academia and Deepmind, the press and the Cabinet Office. Should governments be able to break the encryption on our phones? Are we entitled to any privacy for our health and social care records? And what can be done about fake news? If the Internet’s going to be censored, who do we trust to do it?
The occasion is the 20th birthday of the Foundation for Information Policy Research, which was launched on May 29th 1998 to campaign against what became the Regulation of Investigatory Powers Act. Tony Blair wanted to be able to treat all URLs as traffic data and collect everyone’s browsing history without a warrant; we fought back, and our “big browser” amendment defined traffic data to be only that part of the URL needed to identify the server. That set the boundary. Since then, FIPR has engaged in research and lobbying on export control, censorship, health privacy, electronic voting and much else.
After twenty years it’s time to take stock. It’s remarkable how little the debate has shifted despite everything moving online. The police and spooks still claim they need to break encryption but still can’t support that with real evidence. Health administrators still want to sell our medical records to drug companies without our consent. Governments still can’t get it together to police cybercrime, but want to censor the Internet for all sorts of other reasons. Laws around what can be said or sold online – around copyright, pornography and even election campaign funding – are still tussle spaces, only now the big beasts are Google and Facebook rather than the copyright lobby.
A historical perspective may be of some value in guiding future policy debates. If you’d like to join in the discussion, book your free ticket here.
A new Computerphile video explains how we’ve worked out a much better way to track stolen bitcoin. Previous attempts to do this had got entangled in the problem of dealing with transactions that split bitcoin into change, or that consolidate smaller sums into larger ones, and with mining fees. The answer comes from an unexpected direction: a legal precedent in 1816. We discussed the technical details last week at the Security Protocols Workshop; a preprint of our paper is here.
Previous attempts to track tainted coins had used either the “poison” or the “haircut” method. Suppose I open a new address and pay into it three stolen bitcoin followed by seven freshly-mined ones. Then under poison, the output is ten stolen bitcoin, while under haircut it’s ten bitcoin that are marked 30% stolen. After thousands of blocks, poison tainting will blacklist millions of addresses, while with haircut the taint gets diffused, so neither is very effective at tracking stolen property. Bitcoin due-diligence services supplant haircut taint tracking with AI/ML, but the results are still not satisfactory.
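The three-stolen-plus-seven-fresh example above can be worked through in a few lines. This illustrative sketch (not the paper’s actual code) computes the taint of the combined output under each rule:

```python
# Inputs paid into one address: (amount in BTC, taint fraction).
# Three stolen coins (fully tainted) followed by seven freshly-mined ones.
inputs = [(3.0, 1.0), (7.0, 0.0)]

total = sum(amt for amt, _ in inputs)

# Poison: any tainted input marks the entire output as stolen.
poison_taint = 1.0 if any(t > 0 for _, t in inputs) else 0.0

# Haircut: taint is averaged across inputs, weighted by amount.
haircut_taint = sum(amt * t for amt, t in inputs) / total

print(total, poison_taint, haircut_taint)  # 10.0 1.0 0.3
```

Iterated over thousands of blocks, the poison rule marks ever more coins as wholly stolen, while the haircut fractions shrink towards zero, which is why neither tracks stolen property well.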
We discovered that, back in 1816, the High Court had to tackle this problem in Clayton’s case, which involved the assets and liabilities of a bank that had gone bust. The court ruled that money must be tracked through accounts on the basis of first-in, first out (FIFO); the first penny into an account goes to satisfy the first withdrawal, and so on.
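The FIFO rule can be sketched in a few lines. This is an illustrative toy, not the implementation described in our paper; the deposits and withdrawal amounts are hypothetical, not real blockchain data.

```python
from collections import deque

def fifo_taint(deposits, withdrawal):
    """Apply the Clayton's-case rule: first coin in, first coin out.

    deposits: ordered list of (amount, is_tainted) pairs.
    withdrawal: amount withdrawn.
    Returns how much of the withdrawal is tainted under FIFO."""
    queue = deque(deposits)
    tainted = 0.0
    remaining = withdrawal
    while remaining > 0 and queue:
        amt, is_tainted = queue.popleft()
        take = min(amt, remaining)
        if is_tainted:
            tainted += take
        if amt > take:  # put the unspent part of this deposit back
            queue.appendleft((amt - take, is_tainted))
        remaining -= take
    return tainted

# Three stolen bitcoin deposited first, then seven clean ones:
print(fifo_taint([(3.0, True), (7.0, False)], 5.0))  # 3.0 of the 5 withdrawn
```

Because each withdrawn unit is matched to exactly one deposited unit, no taint is ever created or destroyed, which is the lossless property that also lets the tracking run backwards.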
Ilia Shumailov has written software that applies FIFO tainting to the blockchain and the results are impressive, with a massive improvement in precision. What’s more, FIFO taint tracking is lossless, unlike haircut; so in addition to tracking a stolen coin forward to find where it’s gone, you can start with any UTXO and trace it backwards to see its entire ancestry. It’s not just good law; it’s good computer science too.
We plan to make this software public, so that everybody can use it and everybody can see where the bad bitcoins are going.
I’m giving a further talk on Tuesday at a financial-risk conference in Paris.
This is the title of a paper that appeared today in PLOS One. It describes a tool we developed initially to assess the gullibility of cybercrime victims, and which we now present as a general-purpose psychometric of individual susceptibility to persuasion. An early version was described three years ago here and here. Since then we have developed it significantly and used it in experiments on cybercrime victims, Facebook users and IT security officers.
We investigated the effects on persuasion of a subject’s need for cognition, need for consistency, sensation seeking, self-control, consideration of future consequences, need for uniqueness, risk preferences and social influence. The strongest factor was consideration of future consequences, or “premeditation” for short.
We offer a full psychometric test, STP-II, with 54 items spanning 10 subscales, and a shorter STP-II-B with 30 items, which measures the first-order factors but omits the second-order constructs for brevity. The scale is here with the B items marked, and here is a live instance of the survey for you to play with. Once you complete it, there’s an on-the-fly interpretation at the end. You don’t have to give your name and we don’t record your IP address.
We invite everyone to use our STP-II scale – not just in security contexts, but also in consumer and marketing psychology and anywhere else it might possibly be helpful. Do let us know what you find!