Category Archives: Security economics

Social-science angles of security

Report on the IP Bill

This morning at 0930 the Joint Committee on the IP Bill is launching its report. As one of the witnesses who appeared before it, I got an embargoed copy yesterday.

The report is deeply disappointing; even that of the Intelligence and Security Committee (whom we tended to dismiss as government catspaws) is more vigorous. The MPs and peers on the Joint Committee have given the spooks all they wanted, while recommending tweaks and polishes here and there to some of the more obvious hooks and sharp edges.

The committee supports comms data retention, despite acknowledging that multiple courts have found this contrary to EU and human-rights law, and despite the fact that there are cases in the pipeline. It supports extending retention from big telcos offering a public service to private operators and even coffee shops. It supports greatly extending comms data to ICRs; although it does call for more clarity on the definition, it gives the Home Office lots of wriggle room by saying that a clear definition is hard if you want to catch all the things that bad people might do in the future. (Presumably a coffee shop served with an ICR order will have no choice but to install a government-approved black box, or just pipe everything to Cheltenham.) It welcomes the government’s decision to build and operate a request filter – essentially the comms database for which the Home Office has been trying to get parliamentary approval since the days of Jacqui Smith (and which Snowden told us they just built anyway). It comes up with the rather startling justification that this will help privacy, as the police may have access to less stuff (though of course the spooks, including our 5eyes partners and others, will have more). It wants end-to-end encrypted stuff to be made available unless it’s “not practicable to do so”, which presumably means that the Home Secretary can order Apple to add her public key quietly to your keyring to get at your Facetime video chats. That has been a key goal of the FBI in Crypto War 2; a Home Office witness openly acknowledged it.

The comparison with the USA is stark. There, all three branches of government realised they’d gone too far after Snowden. President Obama set up the NSA review group, and implemented most of its recommendations by executive order; the judiciary made changes to the procedures of the FISA Court; and Congress failed to renew the data retention provisions in the Patriot Act (aided by the judiciary). Yet here in Britain the response is just to take Henry VIII powers to legalise all the illegal things that GCHQ had been up to, and hope that the European courts won’t strike the law down yet again.

People concerned for freedom and privacy will just have to hope the contrary. The net effect of the minor amendments proposed by the joint committee will be to make it even harder to get any meaningful amendments as the Bill makes its way through Parliament, and we’ll end up having to rely on the European courts to trim it back.

For more, see Scrambling for Safety, a conference on the bill that we held last month in London (the video is now online), and last week’s Cambridge symposium for a more detailed analysis.

Can we crowdsource trust?

Your browser contains a few hundred root certificates. Many of them were put there by governments; two (Verisign and Comodo) are there because so many merchants trust them that they’ve become ‘too big to fail’. This is a bit like the way people buy the platform with the most software – a pattern of behaviour that let IBM and then Microsoft dominate our industry in turn. But this is not how trust should work; it leads to many failures, some of them invisible.

What’s missing is a mechanism where trust derives from users, rather than from vendors, merchants or states. After all, the power of a religion stems from the people who believe in it, not from the government. Entities with godlike powers that are foisted on us by others and can work silently against us are not gods, but demons. What can we do to exorcise them?

Do You Believe in Tinker Bell? The Social Externalities of Trust explores how we can crowdsource trust. Tor bridges help censorship victims access the Internet freely, and there are not enough of them. We want to motivate lots of people to provide them, and the best providers are simply those who help the most victims. So trust should flow from the support of the users, and it should be hard for powerful third parties to pervert. Perhaps a useful mascot is Tinker Bell, the fairy in Peter Pan, whose power waxes and wanes with the number of children who believe in her.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Ongoing badness in the RIPE database

A month ago I wrote about the presence of route objects for undelegated IPv4 address space within the RIPE database (strictly I should say RIPE NCC — the body that looks after this database).

The folks at RIPE NCC removed a number of these dubious route objects which had been entered by AS204224.

And they were put straight back again!

This continues to this day — it looks to me as if once the RIPE NCC staff go home for the evening the route objects are resurrected.

So for AS204224 (CJSC Mashzavod-Marketing-Servis) you can (at the moment of writing) find route objects for four /19s and two /21s which have creation times between 17:53 and 17:55 this evening (2 November). This afternoon (in RIPE NCC working hours) there were no such route objects.
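If you want to check this for yourself, the route objects can be pulled out with an ordinary port-43 whois query. Here is a minimal sketch in Python, assuming the RIPE whois server at whois.ripe.net and its "-i origin" / "-T route" inverse-lookup flags:

    import socket

    def ripe_whois(query, server="whois.ripe.net", port=43):
        """Send a single WHOIS query (RFC 3912) and return the raw response."""
        with socket.create_connection((server, port), timeout=30) as s:
            s.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # Inverse lookup: all route objects whose origin is AS204224
    print(ripe_whois("-T route -i origin AS204224"))

Run once in the evening and again the following afternoon, this makes the appearing-and-disappearing pattern easy to see.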

As an aside: besides AS204224, I also see route objects for undelegated space (these are all more recent than my original blog article) from:

    AS200439 LLC Stadis, Ekaterinburg, Russia
    AS204135 LLC Transmir, Blagoveshensk, Russia
    AS204211 LLC Aspect, Novgorod, Russia

I’d like to give a detailed account of the creation and deletion of the AS204224 route objects, but I don’t believe that there’s a public archive of RIPE database snapshots (you can find the latest snapshot taken at about 03:45 each morning at ftp://ftp.ripe.net/ripe/dbase, but if you don’t download it that day then it’s gone!).
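The workaround is simply to fetch the snapshot every day before it is replaced. A minimal sketch of such a fetch-and-keep script follows; the split file name ripe.db.route.gz is my assumption about how the route objects are published under that directory:

    import datetime
    import shutil
    import urllib.request

    # The daily snapshot is overwritten each morning, so fetch a dated copy and keep it.
    # Assumes the route objects are published as a split file called ripe.db.route.gz.
    URL = "ftp://ftp.ripe.net/ripe/dbase/split/ripe.db.route.gz"

    def archive_todays_snapshot():
        stamp = datetime.date.today().isoformat()
        filename = f"ripe.db.route.{stamp}.gz"
        with urllib.request.urlopen(URL) as response, open(filename, "wb") as out:
            shutil.copyfileobj(response, out)
        return filename

    if __name__ == "__main__":
        print("saved", archive_todays_snapshot())

Run once a day from cron, this preserves a dated copy of the route objects so that later creations and deletions can be diffed.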

I have, as it happens, been collecting copies of the database for the past few days, and the creation times for the route objects are:

    Thu 2015-10-29  18:03
    Fri 2015-10-30  15:01
    Sat 2015-10-31  17:54
    Sun 2015-11-01  18:31
    Mon 2015-11-02  17:53

There are two conclusions one might draw from this: perhaps the AS204224 people only come out at night and dutifully delete their route objects when the sun rises, before repeating the activity the following night (sounds like one of Grimm’s fairy tales, doesn’t it?).

The alternative, less magical, explanation is that the staff at RIPE NCC are playing “whack-a-mole” INSIDE THEIR OWN DATABASE! (and although they work weekends, they go home early on Friday afternoons!)

87% of Android devices insecure because manufacturers fail to provide security updates

We are presenting a paper at SPSM next week that shows that, on average over the last four years, 87% of Android devices are vulnerable to attack by malicious apps. This is because manufacturers have not provided regular security updates. Some manufacturers are much better than others, however, and our study shows that devices built by LG and Motorola, as well as those shipped under the Google Nexus brand, are much better than most. Users, corporate buyers and regulators can find further details on manufacturer performance at AndroidVulnerabilities.org.

We used data collected by our Device Analyzer app, which is available from the Google Play Store. The app collects data from volunteers around the globe and we have used data from over 20,000 devices in our study. As always, we are keen to recruit more contributors! We combined Device Analyzer data with information we collected on critical vulnerabilities affecting Android. We used this to develop the FUM score, which can be used to compare the security provided by different manufacturers. Each manufacturer is given a score out of 10 based on: f, the proportion of devices free from known critical vulnerabilities; u, the proportion of devices updated to the most recent version; and m, the mean number of vulnerabilities the manufacturer has not fixed on any device.
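As a rough illustration of how the three components combine into a score out of ten, here is a sketch; the weights and the way m is squashed onto a 0–1 scale below are illustrative rather than the exact formula from the paper:

    import math

    def fum_score(f, u, m):
        """Combine the three components into a score out of 10.

        f: proportion of devices free from known critical vulnerabilities (0..1)
        u: proportion of devices running the most recent version (0..1)
        m: mean number of unfixed vulnerabilities per device (>= 0)

        The weights and the mapping of m onto a 0..1 scale are illustrative;
        the paper defines the exact formula.
        """
        m_component = 2.0 / (1.0 + math.exp(m))  # 1 when m == 0, falls towards 0 as m grows
        return 4.0 * f + 3.0 * u + 3.0 * m_component

    # Example: a hypothetical manufacturer
    print(round(fum_score(f=0.3, u=0.4, m=1.2), 1))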

The problem with the lack of updates to Android devices is well known and recently Google and Samsung have committed to shipping security updates every month. Our hope is that by quantifying the problem we can help people when choosing a device and that this in turn will provide an incentive for other manufacturers and operators to deliver updates.

Google has done a good job at mitigating many of the risks, and we recommend users only install apps from Google’s Play Store since it performs additional safety checks on apps. Unfortunately Google can only do so much, and recent Android security problems have shown that this is not enough to protect users. Devices require updates from manufacturers, and the majority of devices aren’t getting them.

For further information, contact Daniel Thomas and Alastair Beresford via contact@androidvulnerabilities.org

Badness in the RIPE Database

The Cambridge Cloud Cybercrime Centre formally started work this week … but rather than writing about that I thought I’d document some publicly visible artefacts of improper behaviour (much of which, my experience tells me, is very likely to do with the sending of email spam).

RIPE is one of the five Regional Internet Registries (RIRs) and they have the responsibility of making allocations of IP address space to entities in Europe and the Middle East (ARIN deals with North America, APNIC with Asia and Australasia, LACNIC with Latin America and the Caribbean and AfriNIC with Africa).

Their public “WHOIS” databases document these allocations and there are web interfaces to access them (for RIPE use https://apps.db.ripe.net/search/query.html).

The RIPE Database also holds a number of other sets of data including a set of “routes”. Unfortunately some of those routes are prima facie evidence of people behaving badly.
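The badness in question is, as the follow-up post above explains, route objects covering address space that has never been delegated. A minimal sketch of how one might test a prefix against the RIPE NCC delegation statistics follows; the file location is my assumption about its customary place, and the check ignores delegations that are split across adjacent blocks:

    import ipaddress
    import urllib.request

    # RIPE NCC's extended delegation statistics; this URL is an assumption about
    # the customary location of the file.
    STATS_URL = "https://ftp.ripe.net/pub/stats/ripencc/delegated-ripencc-extended-latest"

    def delegated_ipv4_ranges():
        """Yield (first, last) integer address pairs for blocks RIPE NCC has allocated or assigned."""
        with urllib.request.urlopen(STATS_URL) as response:
            for raw in response:
                fields = raw.decode("ascii", errors="replace").strip().split("|")
                # Skip header, summary and comment lines, and anything that is not IPv4.
                if len(fields) < 7 or fields[2] != "ipv4":
                    continue
                if fields[6] not in ("allocated", "assigned"):
                    continue
                start = int(ipaddress.IPv4Address(fields[3]))
                count = int(fields[4])
                yield start, start + count - 1

    def is_delegated(prefix, ranges):
        """True if the whole prefix lies inside a single delegated block.

        Simplification: a route covering several adjacent delegated blocks
        would need the ranges merged first to be recognised.
        """
        net = ipaddress.ip_network(prefix)
        first, last = int(net[0]), int(net[-1])
        return any(s <= first and last <= e for s, e in ranges)

    ranges = list(delegated_ipv4_ranges())
    # Substitute a prefix of interest; 192.0.2.0/24 is documentation space, so expect False.
    print(is_delegated("192.0.2.0/24", ranges))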

FCA view on unauthorised transactions

Yesterday the Financial Conduct Authority (the UK bank regulator) issued a report on Fair treatment for consumers who suffer unauthorised transactions. This is an issue in which we have an interest, as fraud victims regularly come to us after being turned away by their bank and by the financial ombudsman service. Yet the FCA have found that everything is hunky dory, and conclude “we do not believe that further thematic work is required at this stage”.

One of the things the FCA asked their consultants was whether there’s any evidence that claims are rejected on the sole basis that a PIN was used. The consultants didn’t want to rely on existing work but instead surveyed a nationally representative sample of 948 people and found that 16% had a transaction dispute in the last year. These were 37% MOTO (mail order / telephone order), 22% cancelled future-dated payment, 15% ATM cash, 13% shop, 13% lump sum from bank account. Of customers who complained, 43% were offered their money back spontaneously; a further 41% asked; in the end a total of 68% got refunds after varying periods of time. In total 7% (15 victims) had their claim declined, most because the bank said the transaction was “authorised” or followed a “contract with merchant”, and 2 because chip and PIN was used (one of them an ATM transaction; the other admitted sharing their PIN). 12 of these 15 considered the result unfair. These figures are entirely consistent with what we learn from the British Crime Survey and elsewhere: some two million UK victims a year, most of whom get their money back but many of whom don’t, with a hard core of perhaps a few tens of thousands who end up feeling that their bank has screwed them.

The case studies profiled in the consultants’ paper were of glowing happy people who got their money back; the 12 sad losers were not profiled, and the consultants concluded that “Customers might be being denied refunds on the sole basis that Chip and PIN were used … we found little evidence of this” (p 49) and went on to remark helpfully that some customers admitted sharing their PINs and felt OK lying about this. The FCA happily paraphrases this as “We also did not find any evidence of firms holding customers liable for unauthorised transactions solely on the basis that the PIN was used to make the transaction” (main report, p 13, 3.25).

According to recent news reports, the former head of the FCA, Martin Wheatley, was sacked by George Osborne for being too harsh on the banks.

Job Ads: Cloud Cybercrime Centre

The Cambridge Cloud Cybercrime Centre (more information about our vision for this brand-new initiative is in this earlier article) now has a number of Research Associate / Research Assistant positions to fill:

  • A person to take responsibility for improving the automated processing of our incoming data feeds. They will help develop new sources of data, add new value to existing data and develop new ways of understanding and measuring cybercrime: full details are here.
  • A person with a legal background to carry out research into the legal and policy aspects of cybercrime data sharing. Besides contributing to the academic literature and to the active policy debates in this area they will assist in negotiating relevant arrangements with data suppliers and users: full details are here.

and with special thanks to ThreatSTOP, whose generosity has funded this extra position:

  • We also seek someone to work on distributed denial-of-service (DDoS) measurement. We have been gathering data on reflected UDP DDoS events for many months and we want to extend our coverage and develop a much more detailed analysis of the location of perpetrators and victims along with real-time datafeeds of relevant information to assist in reducing harm. Full details are here.

Please follow the links to read the relevant formal advertisement for the details about exactly who and what we’re looking for and how to apply.