The emotional cost of cybercrime

We know more and more about the financial cost of cybercrime, but there has been very little work on its emotional cost. David Modic and I decided to investigate. We wanted to test empirically whether there are emotional repercussions to becoming a victim of fraud (yes, there are). We wanted to compare emotional and financial impact across different categories of fraud and establish a ranking list (and we did). An interesting, although not surprising, finding was that in every tested category the victim’s perception of emotional impact outweighed the reported financial loss.

A victim may think that they will still be able to recover their money, if not their pride. That really depends on the type of fraud they fell victim to. If it is auction fraud, their chances of recovery are comparatively higher than in bank fraud – we found that 26% of our sample would attempt to recover funds lost in a fraudulent auction, and approximately half of them were reimbursed (see this presentation). There is considerable evidence that banks are not very likely to believe someone claiming to be a victim of, say, identity theft and by extension bank fraud. Thus, when someone ends up out of pocket, they will likely also go through a process of secondary victimisation, in which they are told they broke some small-print rule – having the same PIN for two of their bank cards, say, or not using the bank’s approved anti-virus software – and are therefore not eligible for any refund, and it is all their own fault, really.

You can find the article here or here. (It was published in IEEE Security & Privacy.)

This paper complements and extends our earlier work on the costs of cybercrime, where we show that the broader economic costs to society of cybercrime – such as loss of confidence in online shopping and banking – also greatly exceed the amounts that cybercriminals actually manage to steal.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Efficient multivariate statistical techniques for extracting secrets from electronic devices

That’s the title of my PhD thesis, supervised by Markus Kuhn, which recently became available as CL tech report 878:

In this thesis I provide a detailed presentation of template attacks, which are considered the most powerful kind of side-channel attacks, and I present several methods for implementing and evaluating this attack efficiently in different scenarios.
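In outline, a template attack has a profiling phase and an attack phase: the attacker models the leakage for each candidate value as a multivariate Gaussian (a mean vector plus a covariance matrix estimated from traces taken on a profiling device), then classifies traces from the target device by maximum likelihood. The following is an illustrative sketch using synthetic two-sample “traces” – not the thesis’s datasets or its efficient variants:

```python
import numpy as np

def build_templates(traces, labels):
    """Profiling phase: per candidate value, estimate the mean leakage
    vector, plus a single pooled covariance across all values."""
    values = np.unique(labels)
    means = {v: traces[labels == v].mean(axis=0) for v in values}
    # Pooling the covariance improves estimates when traces are scarce.
    centred = np.vstack([traces[labels == v] - means[v] for v in values])
    cov_inv = np.linalg.inv(np.cov(centred, rowvar=False))
    return means, cov_inv, values

def classify(trace, means, cov_inv, values):
    """Attack phase: pick the candidate value whose Gaussian template
    assigns the attack trace the highest (log-)likelihood."""
    def log_lik(v):
        d = trace - means[v]
        return -0.5 * d @ cov_inv @ d
    return max(values, key=log_lik)

# Synthetic demo: 4 candidate values, leakage = value-dependent mean + noise.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(4), 200)
traces = labels[:, None] * np.array([1.0, 0.5]) + rng.normal(0, 0.3, (800, 2))
means, cov_inv, values = build_templates(traces, labels)
print(classify(np.array([3.1, 1.4]), means, cov_inv, values))
```

With real power or EM traces the interesting part is estimating and manipulating the covariance efficiently in high dimension, which is where the methods in the thesis come in.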

Among other contributions, these methods may allow evaluation labs to perform their evaluations faster; they show that we can determine an 8-bit target value almost perfectly even when that value is manipulated by a single LOAD instruction (possibly the best published results of this kind); and they show how to cope with differences across devices.

Some of the datasets used in my experiments along with MATLAB scripts for reproducing my results are available here:


Ongoing badness in the RIPE database

A month ago I wrote about the presence of route objects for undelegated IPv4 address space within the RIPE database (strictly I should say RIPE NCC — the body that looks after this database).

The folks at RIPE NCC removed a number of these dubious route objects which had been entered by AS204224.

And they were put straight back again!

This continues to this day — it looks to me as if once the RIPE NCC staff go home for the evening the route objects are resurrected.

So for AS204224 (CJSC Mashzavod-Marketing-Servis) you can (at the time of writing) find route objects for four /19s and two /21s with creation times between 17:53 and 17:55 this evening (2 November). This afternoon (in RIPE NCC working hours) there were no such route objects.

As an aside: as well as AS204224 I see route objects for undelegated space (these are all more recent than my original blog article) from:

    AS200439 LLC Stadis, Ekaterinburg, Russia
    AS204135 LLC Transmir, Blagoveshensk, Russia
    AS204211 LLC Aspect, Novgorod, Russia

I’d like to give a detailed account of the creation and deletion of the AS204224 route objects, but I don’t believe that there’s a public archive of RIPE database snapshots (you can find the latest snapshot taken at about 03:45 each morning at, but if you don’t download it that day then it’s gone!).

However, I have been collecting copies of the database for the past few days and the creation times for the route objects are:

    Thu 2015-10-29  18:03
    Fri 2015-10-30  15:01
    Sat 2015-10-31  17:54
    Sun 2015-11-01  18:31
    Mon 2015-11-02  17:53
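For what it’s worth, the pattern is easy to check mechanically: every creation falls at or after 15:00, i.e. as the RIPE NCC working day is winding down. A throwaway sketch over the times listed above:

```python
from datetime import datetime

# Creation times of the AS204224 route objects, as captured in my
# daily snapshots of the RIPE database.
creations = [
    "2015-10-29 18:03",  # Thu
    "2015-10-30 15:01",  # Fri
    "2015-10-31 17:54",  # Sat
    "2015-11-01 18:31",  # Sun
    "2015-11-02 17:53",  # Mon
]
times = [datetime.strptime(s, "%Y-%m-%d %H:%M") for s in creations]

# All re-creations happen at or after 15:00 local time.
print(all(t.hour >= 15 for t in times))  # True
```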

There are two conclusions to draw from this: perhaps the AS204224 people only come out at night and dutifully delete their route objects when the sun rises, before repeating the activity the following night (sounds like one of Grimm’s fairy tales, doesn’t it?).

The alternative, less magical explanation, is that the staff at RIPE NCC are playing “whack-a-mole” INSIDE THEIR OWN DATABASE! (and although they work weekends, they go home early on Friday afternoons!)

Emerging, fascinating, and disruptive views of quantum mechanics

I have just spent a long weekend at Emergent Quantum Mechanics (EmQM15). This workshop is organised every couple of years by Gerhard Groessing and is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. It’s held in Austria – the home not just of the main experimentalists working to close loopholes in the Bell tests, such as Anton Zeilinger, but of many of the physicists still looking for an underlying classical model from which quantum phenomena might emerge. The relevance to the LBT audience is that the security proofs of quantum cryptography, and the prospects for quantum computing, turn on this obscure area of science.

The two themes emergent from this year’s workshop are both relevant to these questions; they are weak measurement and emergent global correlation.

Weak measurement goes back to the 1980s and the thesis of Lev Vaidman. The idea is that you can probe the trajectory of a quantum mechanical particle by making many measurements of a weakly coupled observable between preselection and postselection operations. This has profound theoretical implications, as it means that the Heisenberg uncertainty limit can be stretched in carefully chosen circumstances; Masanao Ozawa has come up with a more rigorous version of the Heisenberg bound, and in fact gave one of the keynote talks two years ago. Now all of a sudden there are dozens of papers on weak measurement, exploring all sorts of scientific puzzles. This leads naturally to the question of whether weak measurement is any good for breaking quantum cryptosystems. After some discussion with Lev I’m convinced the answer is almost certainly no; getting information about quantum states takes an exponential amount of work and a lot of averaging, and works only in specific circumstances, so it’s easy for the designer to forestall. There is however a question around interdisciplinary proofs. Physicists have known about weak measurement since 1988 (even if few paid attention till a few years ago), yet no-one has rushed to tell the crypto community “Sorry, guys, when we said that nothing can break the Heisenberg bound, we kinda overlooked something.”

The second theme, emergent global correlation, may be of much more profound interest, to cryptographers and physicists alike.

Continue reading Emerging, fascinating, and disruptive views of quantum mechanics

87% of Android devices insecure because manufacturers fail to provide security updates

We are presenting a paper at SPSM next week that shows that, on average over the last four years, 87% of Android devices have been vulnerable to attack by malicious apps. This is because manufacturers have not provided regular security updates. Some manufacturers are much better than others, however, and our study shows that devices built by LG and Motorola, as well as devices shipped under the Google Nexus brand, are much better than most. Users, corporate buyers and regulators can find further details on manufacturer performance at

We used data collected by our Device Analyzer app, which is available from the Google Play Store. The app collects data from volunteers around the globe and we have used data from over 20,000 devices in our study. As always, we are keen to recruit more contributors! We combined Device Analyzer data with information we collected on critical vulnerabilities affecting Android. We used this to develop the FUM score which can be used to compare the security provided by different manufacturers. Each manufacturer is given a score out of 10 based on: f, the proportion of devices free from known critical vulnerabilities; u, the proportion of devices updated to the most recent version; and m, the mean number of vulnerabilities the manufacturer has not fixed on any device.
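To sketch how the three components combine into a score out of 10 (the weighting below is illustrative – see the paper for the exact formula – but it has the right shape: f and u contribute proportionally, while each additional unfixed vulnerability decays the third component towards zero):

```python
import math

def fum_score(f, u, m):
    """Illustrative FUM-style composite, out of 10.

    f: proportion of devices free from known critical vulnerabilities,
    u: proportion of devices updated to the most recent version,
    m: mean number of unfixed vulnerabilities per device.
    The 4/3/3 split below is an illustrative weighting, not the
    paper's normative definition.
    """
    return 4 * f + 3 * u + 3 * (2 / (1 + math.exp(m)))

# A hypothetical manufacturer: 40% of devices free of known critical
# vulnerabilities, 30% on the latest version, and on average 2 unfixed
# vulnerabilities per device.
print(round(fum_score(0.4, 0.3, 2.0), 2))  # 3.22
```

Note that a manufacturer with every device vulnerability-free, fully updated, and with nothing outstanding (f = 1, u = 1, m = 0) scores exactly 10 under this weighting.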

The problem with the lack of updates to Android devices is well known and recently Google and Samsung have committed to shipping security updates every month. Our hope is that by quantifying the problem we can help people when choosing a device and that this in turn will provide an incentive for other manufacturers and operators to deliver updates.

Google has done a good job at mitigating many of the risks, and we recommend users only install apps from Google’s Play Store since it performs additional safety checks on apps. Unfortunately Google can only do so much, and recent Android security problems have shown that this is not enough to protect users. Devices require updates from manufacturers, and the majority of devices aren’t getting them.

For further information, contact Daniel Thomas and Alastair Beresford via

Badness in the RIPE Database

The Cambridge Cloud Cybercrime Centre formally started work this week … but rather than writing about that I thought I’d document some publicly visible artefacts of improper behaviour (much of which, my experience tells me, is very likely to do with the sending of email spam).

RIPE is one of the five Regional Internet Registries (RIRs); it has the responsibility of making allocations of IP address space to entities in Europe and the Middle East (ARIN deals with North America, APNIC with Asia and Australasia, LACNIC with Latin America and the Caribbean, and AfriNIC with Africa).

Their public “WHOIS” databases document these allocations, and there are web interfaces to access them (for RIPE use

The RIPE Database also holds a number of other sets of data including a set of “routes”. Unfortunately some of those routes are prima facie evidence of people behaving badly.
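Route objects are plain RPSL text, so spotting ones that cover undelegated space is mechanical: parse out the route: and origin: attributes, then test each prefix against the known delegated blocks (in practice, against the delegated-extended statistics files the RIRs publish). A minimal sketch, with a made-up route object and a made-up delegation list:

```python
import ipaddress

def parse_route_objects(rpsl_text):
    """Extract (prefix, origin AS) pairs from RPSL route objects,
    which are separated by blank lines."""
    routes, prefix, origin = [], None, None
    for line in rpsl_text.splitlines():
        if line.startswith("route:"):
            prefix = ipaddress.ip_network(line.split(":", 1)[1].strip())
        elif line.startswith("origin:"):
            origin = line.split(":", 1)[1].strip()
        elif not line.strip() and prefix is not None:
            routes.append((prefix, origin))
            prefix, origin = None, None
    if prefix is not None:
        routes.append((prefix, origin))
    return routes

def undelegated(routes, delegated):
    """Route objects whose prefix is not covered by any delegated block."""
    return [(p, o) for p, o in routes
            if not any(p.subnet_of(d) for d in delegated)]

# Hypothetical example: the first object covers delegated space, the
# second does not (both prefixes and AS numbers are documentation values).
sample = """\
route:  192.0.2.0/24
origin: AS64500

route:  198.51.100.0/22
origin: AS64501
"""
delegated = [ipaddress.ip_network("192.0.2.0/24")]
print(undelegated(parse_route_objects(sample), delegated))
# only the 198.51.100.0/22 object is flagged
```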
Continue reading Badness in the RIPE Database

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[CHERI tablet photo]
FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process — a far more efficient and programmer-friendly target for secure software than current architectures.

Continue reading CHERI: Architectural support for the scalable implementation of the principle of least privilege

Decepticon: interdisciplinary conference on deception research

I’m at Decepticon 2015 and will be liveblogging the talks in followups to this post. Up till now, research on deception has been spread around half a dozen different events, aimed at cognitive psychologists, forensic psychologists, law enforcement, cybercrime specialists and others. My colleague Sophie van der Zee decided to organise a single annual event to bring everyone together, and Decepticon is the result. With over 160 registrants for the first edition of the event (and late registrants turned away) it certainly seems to have hit a sweet spot.

Award-winning case history of the health privacy scandal

Each year we divide our masters of public policy students into teams and get them to write case studies of public policy failures. The winning team this year wrote a case study of the care.data fiasco. The UK government collected personal health information on tens of millions of people who had had hospital treatment in England and then sold it off to researchers, drug companies and even marketing firms, with only a token gesture of anonymisation. In practice patients were easy to identify. The resulting scandal stalled plans to centralise GP data as well, at least for a while.

Congratulations to Lizzie Presser, Maia Hruskova, Helen Rowbottom and Jesse Kancir, who tell the story of how mismanagement, conflicts and miscommunication led to a failure of patient privacy on an industrial scale, and discuss the lessons that might be learned. Their case study appeared today in Technology Science, a new open-access journal for people studying conflicts that arise between technology and society. LBT readers will recall several posts reporting the problem, but it’s great to have a proper, peer-reviewed case study that we can give to future generations of students. (Incidentally, the previous year’s winning case study was on a related topic, the failure of the NHS National Programme for IT.)