Category Archives: Security economics

Social-science angles of security

Economics of peer-to-peer systems

A new paper, Olson’s Paradox Revisited: An Empirical Analysis of File-Sharing Behaviour in P2P Communities, finds a positive correlation between the size of a BitTorrent file-sharing community and the amount of content shared, despite a reduced individual propensity to share in larger groups, and deduces from this that file-sharing communities provide a pure (non-rival) public good. Forcing users to upload results in a smaller catalogue; but private networks provide both more and better content, as do networks aimed at specialised communities.

George Danezis and I produced a theoretical model of this five years ago in The Economics of Censorship Resistance. It’s nice to see that the data, now collected, bear us out.

The Economics of Privacy in Social Networks

We often equate social networking with Facebook, MySpace, and the also-rans, but in reality there are tons of social networks out there, dozens of which have membership in the millions. Around the world it’s quite a competitive market. Sören Preibusch and I decided to study the whole ecosystem to analyse how free-market competition has shaped the privacy practices which I’ve been complaining about. We carefully examined 45 sites, collecting over 250 data points about each site’s privacy policies, privacy controls, data collection practices, and more. The results were fascinating, as we presented this week at the WEIS conference in London. Our full paper and complete dataset are now available online as well.

We collected a lot of data, and there was a little bit of something for everybody. There was encouraging news for fans of globalisation, as we found the social networking concept popular across many cultures and languages, with the most popular sites being available in over 40 languages. There was an interesting finding from a business perspective that photo-sharing may be the killer application for social networks, as this feature was promoted far more often than sharing videos, blogging, or playing games. Unfortunately the news was mostly negative from a privacy standpoint. We found some predictable but still surprising problems. Too much unnecessary data is collected by most sites, with 90% requiring a full name and date of birth. Security practices are dreadful: no sites employed phishing countermeasures, and 80% of sites failed to protect password entry using TLS. Privacy policies were obfuscated and confusing, and almost half failed basic accessibility tests. Privacy controls were confusing and overwhelming, and profiles were almost universally left open by default.
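As a rough illustration of how such a survey can be summarised (the site names, field names, and records below are invented for the sketch, not drawn from our actual dataset):

```python
# Hypothetical sketch: tallying the prevalence of privacy practices
# across a survey of social networking sites. All data is invented.
sites = [
    {"name": "ExampleBook", "requires_dob": True,  "tls_login": False, "default_private": False},
    {"name": "FotoShare",   "requires_dob": True,  "tls_login": True,  "default_private": False},
    {"name": "NicheNet",    "requires_dob": False, "tls_login": False, "default_private": True},
]

def prevalence(records, field):
    """Fraction of surveyed sites exhibiting a given practice."""
    return sum(r[field] for r in records) / len(records)

for field in ("requires_dob", "tls_login", "default_private"):
    print(f"{field}: {prevalence(sites, field):.0%}")
```

With a few hundred data points per site, headline figures like "90% require a full name and DOB" fall out of exactly this kind of per-field tally.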

The most interesting story we found, though, was how sites consistently hid any mention of privacy until we visited the privacy policies, where they provided paid privacy seals and strong reassurances about how important privacy is. We developed a novel economic explanation for this: sites appear to craft two different messages for two different populations. Most users care about privacy but don’t think about it in day-to-day life. Sites take care to avoid mentioning privacy to them, because even mentioning privacy positively will cause them to be more cautious about sharing data. This phenomenon is known as “privacy salience” and it makes sites tread very carefully around privacy, because users must be comfortable sharing data for the site to be fun. Instead of mentioning privacy, new users are shown a huge sample of other users posting fun pictures, which encourages them to share as well. For the privacy fundamentalists who go looking for privacy by reading the privacy policy, though, sites drum up privacy reassurance.

The privacy fundamentalists of the world may be positively influencing privacy on major sites through their pressure. Indeed, the bigger, older, and more popular sites we studied had better privacy practices overall. But the desire to limit privacy salience is also a major problem, because it prevents sites from providing clear information about their privacy practices. Most users therefore can’t tell what they’re getting into, resulting in the predominance of poor practices in this “privacy jungle.”

Security and Human Behaviour 2009

I’m at SHB 2009, which brings security engineers together with psychologists, behavioural economists and others interested in deception, fraud, fearmongering, risk perception, and how we can make security systems more usable. Here is the agenda.

This workshop was first held last year, and most of us who attended reckoned it was the most exciting event we’d been to in some while. (I blogged SHB 2008 here.) In followups that will appear as comments to this post, I’ll be liveblogging SHB 2009.

Location privacy

I was recently asked for a brief (4-page) invited paper for a forthcoming special issue of the ACM SIGSPATIAL on privacy and security of location-based systems, so I wrote Foot-driven computing: our first glimpse of location privacy issues.

In 1989 at ORL we developed the Active Badge, the first indoor location system: an infrared transmitter worn by personnel that allowed you to tell which room the wearer was in. Every press and TV reporter who visited our lab worried about the intrusiveness of this technology; yet, today, all those people happily carry mobile phones through which they can be tracked anywhere they go. The significance of the Active Badge project was to give us a head start of a few years during which to think about location privacy before it affected hundreds of millions of people. (There is more on our early ubiquitous computing work at ORL in this free excerpt from my book.)
The ORL Active Badge

Location privacy is a hard problem to solve, first because ordinary people don’t seem to actually care, and second because there is a misalignment of incentives: those who could do the most to address the problem are the least affected and the least concerned about it. But we have a responsibility to address it, in the same way that designers of new vehicles have a responsibility to address the pollution and energy consumption issue.

Security economics video

Here is a video of a talk I gave at DMU on security economics (and the slides). I’ve given variants of this survey talk at various conferences over the past two or three years; at last one of them recorded the talk and put the video online. There’s also a survey paper that covers much of the same material. If you find this interesting, you might enjoy coming along to WEIS (the Workshop on the Economics of Information Security) on June 24-25.

Temporal Correlations between Spam and Phishing Websites

Richard Clayton and I have been studying phishing website take-down for some time. We monitored the availability of phishing websites, finding that while most phishing websites are removed within a day or two, a substantial minority remain up for much longer. We later found that one of the main reasons so many websites slip through the cracks is that the take-down companies responsible for removal refuse to share their URL lists with each other.

One nagging question remained, however. Do long-lived phishing websites cause any harm? Would removing them actually help? To get that answer, we had to bring together data on the timing of phishing spam transmission (generously shared by Cisco IronPort) with our existing data on phishing website lifetimes. In our paper co-authored with Henry Stern and presented this week at the USENIX LEET Workshop in Boston, we describe how a substantial portion of long-lived phishing websites continue to receive new spam until the website is removed. For instance, fresh spam continues to be sent out for 75% of phishing websites alive after one week, attracting new victims. Furthermore, around 60% of phishing websites still alive after a month keep receiving spam advertisements.
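The core measurement can be sketched roughly as follows. This is an illustration with invented per-site numbers, not the paper's actual code or the IronPort data; for each phishing site we assume we know its lifetime and the day its last spam was observed:

```python
# Illustrative sketch (invented data): among phishing sites still alive
# past a given age threshold, what fraction received fresh spam after
# that point? This mirrors the shape of the measurement, not the paper.
SITES = [
    # (lifetime_days, last_spam_day) per phishing website
    (1, 0), (10, 9), (14, 8), (40, 35), (35, 2), (60, 45),
]

def still_spammed(sites, threshold_days):
    """Among sites alive past the threshold, fraction spammed after it."""
    survivors = [(life, spam) for life, spam in sites if life > threshold_days]
    if not survivors:
        return 0.0
    return sum(spam > threshold_days for _, spam in survivors) / len(survivors)

print(f"alive > 1 week, still spammed:  {still_spammed(SITES, 7):.0%}")
print(f"alive > 1 month, still spammed: {still_spammed(SITES, 30):.0%}")
```

Joining the spam feed to the website-lifetime data in this way is what lets one say whether long-lived sites keep attracting victims or merely linger unused.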

Consequently, removal of websites by the banks (and the specialist take-down companies they hire) is important. Even when the sites stay up for some time, there is value in continued efforts to get them removed, because this will limit the damage.

However, as we have pointed out before, the take-down companies cause considerable damage by their continuing refusal to share data on phishing attacks with each other, despite our proposals addressing their competitive concerns. Our (rough) estimate of the financial harm due to longer-lived phishing websites was $330 million per year. Given this new evidence of persistent spam campaigns, we are now more confident of this measure of harm.

There are other interesting insights discussed in our new paper. For instance, phishing attacks can be broken down into two main categories: ordinary phishing hosted on compromised web servers, and fast-flux phishing hosted on a botnet infrastructure. It turns out that fast-flux phishing spam is more tightly correlated with the uptime of the associated phishing host. Most spam is sent out around the time the fast-flux website first appears and stops once the website is removed. For phishing websites hosted on compromised web servers, there is much greater variation between the time a website appears and when the spam is sent. Furthermore, fast-flux phishing spam made up 68% of the total email spam detected by IronPort, despite fast-flux sites accounting for only 3% of all the phishing websites.

So there seems to be a cottage industry of fairly disorganized phishing attacks, with perhaps a few hundred people involved. Each compromises a small number of websites, while sending a small amount of spam. Conversely there are a small number of organized gangs who use botnets for hosting, send most of the spam, and are extremely efficient on every measure we consider. We understand that the police are concentrating their efforts on the second set of criminals. This appears to be a sound decision.

Chip and PIN on Trial

The trial of Job v Halifax plc has been set down for April 30th at 1030 in the Nottingham County Court, 60 Canal Street, Nottingham NG1 7EJ. Alain Job is an immigrant from Cameroon who has had the courage to sue his bank over phantom withdrawals from his account. The bank refused to refund the money, making the usual claim that its systems were secure. There’s a blog post on the cavalier way in which the Ombudsman dealt with his case. Alain’s case was covered briefly in the Guardian in the run-up to a previous hearing; see also reports in Finextra here, here and (especially) here.

The trial should be interesting and I hope it’s widely reported. Whatever the outcome, it may have a significant effect on consumer protection in the UK. For years, financial regulators have been just as credulous about the banks’ claims to be in control of their information-security risk management as they were about the similar claims made in respect of their credit risk management (see our blog post on the ombudsman for more). It’s not clear how regulatory capture will (or can) be fixed in respect of credit risk, but it is just possible that a court could fix the consumer side of things. (This happened in the USA with the Judd case, as described in our submission to the review of the ombudsman service — see p 13.)

For further background reading, see blog posts on the technical failures of chip and PIN, the Jane Badger case, the McGaughey case and the failures of fraud reporting. Go back into the 1990s and we find the Halifax again as the complainant in R v Munden; John Munden was prosecuted for attempted fraud after complaining about phantom withdrawals. The Halifax couldn’t produce any evidence and he was acquitted.

The Snooping Dragon

There’s been much interest today in a report that Shishir Nagaraja and I wrote on Chinese surveillance of the Tibetan movement. In September last year, Shishir spent some time cleaning out Chinese malware from the computers of the Dalai Lama’s private office in Dharamsala, and what we learned was somewhat disturbing.

Later, colleagues from the University of Toronto followed through by hacking into one of the control servers Shishir identified (something we couldn’t do here because of the Computer Misuse Act); their report relates how the attackers had controlled malware on hundreds of other PCs, many in government agencies of countries such as India, Vietnam and the Philippines, but also in US firms such as AP and Deloitte.

The story broke today in the New York Times; see also coverage in the Telegraph, the BBC, CNN, the Times of India, AP, InfoWorld, Wired and the Wall Street Journal.

Democracy Theatre on Facebook

You may remember a big PR flap last month about Facebook’s terms of service, followed by Facebook backing down and promising to involve users in a self-governing process of drafting their future terms. This is an interesting step with little precedent amongst commercial web sites. Facebook now has enough users to be the fifth largest nation on earth (recently passing Brazil), and operators of such immense online societies need to define a cyber-government which satisfies their users while operating lawfully within a multitude of jurisdictional boundaries, as well as meeting their legal obligations to the shareholders who own the company.

Democracy is an intriguing approach, and it is encouraging that Facebook is considering this path. Unfortunately, after some review, my colleagues and I are left thoroughly disappointed by both the new documents and the specious democratic process surrounding them. We’ve outlined our arguments in a detailed report; the official deadline for commentary is midnight tonight.

The non-legally binding Statement of Principles outlines an admirable set of goals in plain language, which was refreshing. However, these goals are then undermined for a variety of legal and business reasons by the “Statement of Rights and Responsibilities”, which would effectively be the new Terms of Service. For example, Facebook demands that application developers comply with users’ privacy settings, which it doesn’t give them access to; states that users should have “programmatic access” and then bans users from interacting with the site via “automated means”; and states that the service will transcend national boundaries while banning users from signing up if they live in a country embargoed by the United States.

The stated goal of fairness and equality is also lost. The Statement of Rights and Responsibilities primarily assigns rights to Facebook and imposes responsibilities on users, developers, and advertisers. Facebook still demands a broad licence to all user content, shifts all responsibility for enforcing privacy onto developers, and sneakily disclaims all liability. Yet it demands an unrealistic set of obligations: a literal reading of the document requires users to get explicit permission from other users before viewing their content. Furthermore, Facebook has applied the banking industry’s well-known trick of shifting liability to customers, binding users not to do anything that would “jeopardize the security of their account,” which can be used to dissolve the contract.

The biggest missed opportunity, however, is the utter failure to provide a real democratic process as promised. Users are free to comment on terms, but Facebook is under no obligation to listen. Facebook‘s official group for comments contains a disorganised jumble of thousands of comments, some insightful and many inane. It is difficult to extract intelligent analysis here. Under certain conditions a vote can be called, but this is hopelessly weakened: it only applies to certain types of changes, the conditions of the vote are poorly specified and subject to manipulation by Facebook, and in fact they reserve the right to ignore the vote for “administrative reasons.”

With a nod to Bruce Schneier, we call such steps “democracy theatre.” It seems the goal is not to actually turn governance over to users, but to use the appearance of democracy and user involvement to ward off future criticism. Our term may be new, but the trick is not: it has been used by autocratic regimes around the world for decades.

Facebook’s new terms represent a genuine step forward with improved clarity in certain areas, but an even larger step backward in using democracy theatre to cover the fact that Facebook is a business whose ultimate accountability is to its shareholders. The outrage over the previous terms was real and justified: social networks mean a great deal to their users, and they want to have a real say. Since Facebook appears unwilling to grant them one, we would be remiss to allow it to deflect users’ anger with flowery language and a sham democratic process. For this reason we cannot support the new terms.

[UPDATE: Our report has been officially backed by the Open Rights Group]