Category Archives: Security economics

Social-science angles of security

Debunking cybercrime myths

Our paper Measuring the Cost of Cybercrime sets out to debunk the scaremongering around online crime that governments and defence contractors are using to justify everything from increased surveillance to preparations for cyberwar. It will appear at the Workshop on the Economics of Information Security later this month. There’s also some press coverage.

Last year the Cabinet Office published a report by Detica claiming that cybercrime cost the UK £27bn a year. This was greeted with derision, whereupon the Ministry of Defence’s chief scientific adviser, Mark Welland, asked us whether we could come up with some more defensible numbers.

We assembled a team of experts and collated what’s known, and came to a number of interesting conclusions. For example, we compared the direct costs of cybercrimes (the amount stolen) with the indirect costs (costs in anticipation, such as countermeasures, and costs in consequence, such as paying compensation). With traditional crimes that are now classed as “cyber” because they’re done online, such as welfare fraud, the indirect costs are much less than the direct ones; for “pure” cybercrimes that didn’t exist before (such as fake antivirus software), the indirect costs are much greater. As a striking example, the botnet behind a third of the spam sent in 2010 earned its owner about $2.7m, while the worldwide costs of fighting spam were around $1bn.
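
To put the spam example in perspective, here is a back-of-the-envelope sketch; the figures are the rounded ones quoted above rather than numbers from the underlying datasets:

```python
# Back-of-the-envelope comparison of direct and indirect costs,
# using the rounded spam figures quoted in the text.
direct_gain = 2.7e6    # approximate earnings of the botnet operator (USD, 2010)
indirect_cost = 1.0e9  # approximate worldwide spend on fighting spam (USD, 2010)

ratio = indirect_cost / direct_gain
print(f"Society spends roughly {ratio:.0f} times what the criminals earn")  # ~370x
```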

Some of the reasons for this are already well-known; traditional crimes tend to be local, while the more modern cybercrimes tend to be global and have strong externalities. As for what should be done, our research suggests we should perhaps spend less on technical countermeasures and more on locking up the bad guys. Rather than giving most of its cybersecurity budget to GCHQ, the government should improve the police’s cybercrime and forensics capabilities, and back this up with stronger consumer protection.

I'm from the Government and I'm here to help

Two years ago, Hyoungshick Kim, Jun Ho Huh and I wrote a paper On the Security of Internet Banking in South Korea in which we discussed an IT security policy that had gone horribly wrong. The Government of Korea had tried in 1998 to secure electronic commerce by getting all the banks to use an officially-approved ActiveX plugin, effectively locking most Koreans into IE. We argued in 2010 that this provided less security than it seemed, and imposed high usability and compatibility costs. Hyoungshick presented our paper at a special conference, and the government withdrew the ActiveX mandate.

It’s now apparent that the problem is still there. The bureaucracy created a procedure to approve alternative technologies, and (surprise) still hasn’t approved any. Korean web businesses remain trapped in the bubble, and fall farther and farther behind. This may well come to be seen as a warning to other governments to adopt true open standards, if they want to avoid a similar fate. The Cabinet Office should take note – and don’t forget to respond to their consultation!

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now contain many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.

Three Paper Thursday: BGP and its security

BGP security was a hot topic a few years ago, but has received less study recently. However, with the roll-out of technologies such as IPv6 and DNSSEC, BGP security is making a comeback, especially in industry, and we academics also have much to contribute in this space. In today’s Three Paper Thursday I highlight three recent pieces of work on BGP security. They should also be a good starting point for catching up on the field if your last memories of BGP security involve proposals such as S-BGP and soBGP.

Privacy economics: evidence from the field

It has been argued that privacy is the new currency on the Web. Services offered for free are actually paid for with personal information, which is then turned into money (e.g., through targeted advertising). But what is the exchange rate for privacy? In the largest experiment of its kind so far, and the first conducted in the field, we shed new light on consumers’ willingness to pay for added privacy.

One in three Web shoppers pays half a euro extra to keep their mobile phone number private. And if privacy comes for free, more than 80% of consumers choose the company that collects less personal information, our study concludes.


Social authentication – harder than it looks!

This is the title of a paper we’ll be presenting next week at the Financial Crypto conference (slides). There is also coverage in the New Scientist.

Facebook has a social authentication mechanism where you may be asked to recognise some of your friends from photos as part of the login process. We analysed this and found it to be vulnerable to guessing by your friends, and also to modern face-recognition systems. Most people want privacy only from those close to them: if you’re having an affair, you don’t want your partner to find out, but you don’t care if someone in Mongolia learns about it. And if your partner does find out and becomes your ex, you don’t want them to be able to cause havoc on your account. Celebrities are similar, except that everyone is their friend (and potentially their enemy).

Second, if someone outside your circle of friends mounts a targeted attack on you, then by befriending your friends they can gain enough access to your social circle to collect photos, which they can then feed to face-recognition software, or simply match by eye, to pass the test.
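
To give a feel for how cheap the photo-matching step has become, here is a minimal sketch using the open-source face_recognition library; the directory layout, file names and the assumption that the attacker has already harvested tagged photos of the victim’s friends are mine for illustration, not anything from the paper:

```python
# Minimal sketch: match a social-authentication challenge photo against tagged
# photos harvested from the victim's friends.
# Assumes harvested photos are stored as harvested/<friend name>/<file>.jpg.
import os
import face_recognition

def build_gallery(root="harvested"):
    """Encode one face per harvested photo, labelled with the friend's name."""
    gallery = []  # list of (name, encoding)
    for name in os.listdir(root):
        for fname in os.listdir(os.path.join(root, name)):
            image = face_recognition.load_image_file(os.path.join(root, name, fname))
            encodings = face_recognition.face_encodings(image)
            if encodings:
                gallery.append((name, encodings[0]))
    return gallery

def guess_friend(challenge_photo, gallery):
    """Return the friend whose harvested photo best matches the challenge photo."""
    image = face_recognition.load_image_file(challenge_photo)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None
    distances = face_recognition.face_distance([enc for _, enc in gallery], encodings[0])
    return gallery[distances.argmin()][0]

# Example: answer one of the challenge photos shown at login.
# print(guess_friend("challenge1.jpg", build_gallery()))
```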

Beware of cybercrime data memes

Last year, when I wrote a paper about mitigating malware, I needed some figures on the percentage of machines infected with malware. There is a range of figures, mostly below 10%, but one of the highest was 25%.

I looked into why this occurred and wrote it up in footnote #9 (yes, it’s a paper with a lot of footnotes!). My explanation was:

The 2008 OECD report on Malware [14] contained the sentence “Furthermore, it is estimated that 59 million users in the US have spyware or other types of malware on their computers.” News outlets picked up on this, e.g. The Sydney Morning Herald [20] who divided the 59 million figure into the US population, and then concluded that around a quarter of US computers were infected (assuming that each person owned one computer). The OECD published a correction in the online copy of the report a few days later. They were actually quoting PEW Internet research on adware/spyware (which is a subtly different threat) from 2005 (which was a while earlier than 2008). The sentence should have read “After hearing descriptions of ‘spyware’ and ‘adware’, 43% of internet users, or about 59 million American adults, say they have had one of these programs on their home computer.” Of such errors in understanding the meaning of data is misinformation made.
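
The arithmetic behind the error is easy to reconstruct. In the sketch below, the population figure is a round number I’ve assumed for illustration, not necessarily the one the newspaper used:

```python
# Rough reconstruction of how 59 million became "a quarter of US computers infected".
pew_respondents = 59e6   # "about 59 million American adults" who said they had had
                         # adware or spyware on their home computer (Pew, 2005)
us_population = 240e6    # illustrative round figure for the US computer-owning population

# The newspaper's flawed step: one computer per person, and every Pew respondent
# counted as a currently infected machine.
implied_infection_rate = pew_respondents / us_population
print(f"Implied infection rate: {implied_infection_rate:.0%}")  # roughly a quarter

# What the source actually measured: 43% of internet users saying they had at some
# point had adware or spyware, by their own description -- not a point-in-time
# malware infection rate.
```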

We may be about to have a similar thing happen with Facebook account compromises.

Bankers’ Christmas present

Every Christmas we give our friends in the banking industry a wee present. Sometimes it’s the responsible disclosure of a vulnerability, which we publish the following February: 2007’s was PED certification, 2008’s was CAP, and in 2009 we told the banking industry about the No-PIN attack. This year too we have some goodies in the hamper: watch our papers at Financial Crypto 2012.

In other years, we’ve had arguments with the bankers’ PR wallahs. In 2010, for example, their trade association tried to censor the thesis of one of our students. That saga also continues; Britain’s bankers tried once more to threaten us, so we told them once more to go away. We have other conversations in progress with bankers, most of them thankfully a bit more constructive.

This year’s Christmas present is different: it’s a tale with a happy ending. Eve Russell was a fraud victim whom Barclays initially blamed for her misfortune, as so often happens, and the Financial Ombudsman Service initially found for the bank as it routinely does. Yet this was clearly not right; after many lawyers’ letters, two hearings at the ombudsman, two articles in The Times and a TV appearance on Rip-off Britain, Eve won. This is the first complete case file since the ombudsman came under the Freedom of Information Act; by showing how the system works, it may be useful to fraud victims in the future.

(At Eve’s request, I removed the correspondence and case papers from my website on 5 Oct 2015. Eve was getting lots of calls and letters from other fraud victims and was finally getting weary. I have left just the article in the Times.)

Privacy event on Wednesday

I will be talking in London on Wednesday at a workshop on Anonymity, Privacy, and Open Data about the difficulty of anonymising medical records properly. I’ll be on a panel with Kieron O’Hara who wrote a report on open data for the Cabinet Office earlier this year, and a spokesman from the ICO.

This will be the first public event on the technology and policy issues surrounding anonymisation since yesterday’s announcement that the government will give wide access to anonymous versions of our medical records. I’ve written extensively on the subject: for an overview, see my book chapter which explores the security of medical systems in general from p 282 and the particular problems of using “anonymous” records in research from p 298. For the full Monty, start here.

Anonymisation is hard enough when the data controller is capable and motivated to try hard. In the case of the NHS, anonymisation has always been perfunctory: the default is to remove patients’ names and addresses but leave their postcodes and dates of birth. This makes it easy to re-identify about 99% of patients (the exceptions are mostly twins, soldiers, students and prisoners). And since I wrote that book chapter, the predicted problems have come to pass; for example, the NHS lost a laptop containing over eight million patients’ records.
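
A back-of-the-envelope calculation shows why a postcode plus a date of birth is so identifying; the figures below are rough averages I’ve assumed for illustration:

```python
# Why postcode + date of birth re-identifies almost everyone: a rough calculation.
people_per_postcode = 40           # a full UK postcode covers a few dozen people (assumed average)
plausible_birth_dates = 365 * 90   # roughly 90 years' worth of possible dates of birth

# Expected number of *other* people sharing both your postcode and your date of birth
# (assuming birth dates are spread evenly, which is good enough for a rough estimate).
expected_collisions = (people_per_postcode - 1) / plausible_birth_dates
print(f"Expected namesakes per patient: {expected_collisions:.4f}")  # ~0.001, i.e. almost always unique
```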

Here we go again

The Sunday media have been trailing a speech by David Cameron tomorrow about giving us online access to our medical records and our kids’ school records, and making anonymised versions of them widely available to researchers, companies and others. Here is coverage in the BBC, the Mail and the Telegraph; there’s also a Cabinet Office paper. The measures are supported by the CEO of Glaxo and opposed by many NGOs.

If the Government is going to “ensure all NHS patients can access their personal GP records online by the end of this Parliament”, it will have to compel the thousands of GPs who still keep patient records on their own machines to transfer them to centrally-hosted facilities. Centrally-hosted systems are maintained by people who have to please the Secretary of State rather than GPs, and so become progressively less useful. This won’t just waste doctors’ time; it will have real consequences for patient safety and the quality of care.

We’ve seen this repeatedly over the lifetime of NPfIT and its predecessor the NHS IM&T strategy. Officials who can’t develop working systems become envious of systems created by doctors; they wrest control, and the deterioration starts.

It’s astounding that a Conservative prime minister could get the idea that nationalising something is the best way to make it work better. It’s also astonishing that a Government containing Liberals who believe in human rights, the rule of law and privacy should support the centralisation of medical records a mere two years after the Joseph Rowntree Reform Trust, a Liberal charity, produced the Database State report which explained how the centralisation of medical records (and for that matter children’s records) destroys privacy and contravenes human-rights law.

The coming debate will no doubt be vigorous and will draw on many aspects of information security, from the dreadful security usability (and safety usability) of centrally-purchased NHS systems, through the real hazards of coerced access by vulnerable patients, to the fact that anonymisation doesn’t really work. There’s much more here.

Of course the new centralisation effort will probably fail, just like the last two; health informatics is a hard problem, and even Google gave up. But our privacy should not depend on the government being incompetent at wrongdoing. It should refrain from wrongdoing in the first place.