Open letter to Google

I am one of 38 researchers and academics (almost all of whom are far more important and famous than I will ever be!) who have signed an Open Letter to Google’s CEO, Eric Schmidt.

The letter, whose text is released today, calls upon Google to honour the important privacy promises it has made to its customers and protect users’ communications from theft and snooping by enabling industry standard transport encryption technology (HTTPS) for Google Mail, Docs, and Calendar.

Google already uses HTTPS for sign-in, but the options to make the whole of the session secure are hidden away where few people will ever find them.

Hence, at the moment pretty much everyone who uses a public WiFi connection to read their Gmail or edit a shared doc has no protection at all if any passing stranger decides to peek and see what they’re doing.

However, getting everyone to change their behaviour will take lots of explaining. Much simpler to have Google edit a couple of configuration files and flip a default the other way.
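As a rough illustration of what flipping that default amounts to, here is a minimal sketch (mine, not Google’s actual configuration): a front end that redirects every plain-HTTP request to HTTPS and marks the session cookie Secure so it is never sent in the clear. The hostname and cookie name are made up.

    # Minimal sketch (not Google's actual configuration) of "HTTPS by default":
    # redirect plain-HTTP requests to HTTPS and mark the session cookie Secure
    # so it is never transmitted in the clear. Hostname and cookie are made up.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "mail.example.com")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            ("Set-Cookie", "SID=opaque-session-token; Secure; HttpOnly"),
        ])
        return [b"mail, docs and calendar would be served here\n"]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()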

The letter goes into the issues in considerable detail (it’s eleven pages long with all the footnotes)… Eric Schmidt can hardly complain that we’ve failed to explain the issues to him!

Security and Human Behaviour 2009

I’m at SHB 2009, which brings security engineers together with psychologists, behavioural economists and others interested in deception, fraud, fearmongering, risk perception and how we make security systems more usable. Here is the agenda.

This workshop was first held last year, and most of us who attended reckoned it was the most exciting event we’d been to in quite a while. (I blogged SHB 2008 here.) In follow-ups that will appear as comments to this post, I’ll be liveblogging SHB 2009.

How Privacy Fails: The Facebook Applications Debacle

I’ve been writing a lot about privacy in social networks, and sometimes the immediacy gets lost during the more theoretical debates. Recently, though, I’ve been investigating a massive privacy breach on Facebook’s application platform which serves as a sobering case study. Even to me, the extent of unauthorised data flow I found and the cold economic motivations keeping it going were surprising. Facebook’s application platform remains a disaster from a privacy standpoint, dampening one of the more compelling features of the network.


Attack of the Zombie Photos

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions: access control lists and capabilities. The main website checks each request for a photo against an ACL, then grants a capability to view the photo in the form of an obfuscated URL which can be sent to the photo-server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed, and it appears that most sites use capability-URLs with enough randomness to be unforgeable. There’s another traditional problem with capability systems though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing web sites and found that most failed to remove image files from their photo servers after they were deleted from the main web site. It’s often feared that once data is uploaded into “the cloud,” it’s impossible to tell how many backup copies may exist and where; our results provide clear proof that content-delivery networks are a major problem for data remanence.
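To make the pattern concrete, here is a purely illustrative sketch in Python; the site layout, function names and token length are my assumptions rather than any real site’s design:

    # Purely illustrative sketch of the ACL-plus-capability pattern described
    # above; names, layout and token length are assumptions, not any real
    # site's code.
    import secrets

    acl = {"photo123": {"alice", "bob"}}   # main site: who may view which photo
    photo_server = {}                      # stateless CDN: capability URL -> image bytes

    def publish(photo_id, image_bytes):
        # The main site mints an unguessable URL and hands the image to the CDN.
        token = secrets.token_urlsafe(16)  # ~128 bits of randomness
        url = "https://photos.example.com/%s/%s.jpg" % (photo_id, token)
        photo_server[url] = image_bytes
        return url

    def view(user, photo_id, url):
        # The main site checks its ACL; the photo server then honours the bare URL.
        if user not in acl.get(photo_id, set()):
            raise PermissionError("ACL check failed")
        return photo_server[url]

    def delete(photo_id):
        # The revocation problem: the main site forgets the photo, but unless
        # the CDN is also told to purge it, anyone who saved the old URL can
        # still fetch the image.
        acl.pop(photo_id, None)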

Location privacy

I was recently asked for a brief (4-page) invited paper for a forthcoming special issue of the ACM SIGSPATIAL on privacy and security of location-based systems, so I wrote Foot-driven computing: our first glimpse of location privacy issues.

In 1989 at ORL we developed the Active Badge, the first indoor location system: an infrared transmitter worn by personnel that allowed you to tell which room the wearer was in. Every press and TV reporter who visited our lab worried about the intrusiveness of this technology; yet, today, all those people happily carry mobile phones through which they can be tracked anywhere they go. The significance of the Active Badge project was to give us a head start of a few years during which to think about location privacy before it affected hundreds of millions of people. (There is more on our early ubiquitous computing work at ORL in this free excerpt from my book.)
The ORL Active Badge

Location privacy is a hard problem to solve, first because ordinary people don’t seem to actually care, and second because there is a misalignment of incentives: those who could do the most to address the problem are the least affected and the least concerned about it. But we have a responsibility to address it, in the same way that designers of new vehicles have a responsibility to address the pollution and energy consumption issue.

Security economics video

Here is a video of a talk I gave at DMU on security economics (and the slides). I’ve given variants of this survey talk at various conferences over the past two or three years; at last one of them recorded the talk and put the video online. There’s also a survey paper that covers much of the same material. If you find this interesting, you might enjoy coming along to WEIS (the Workshop on the Economics of Information Security) on June 24-25.

Reducing interruptions with screentimelock

Sometimes I find that I need to concentrate, but there are too many distractions. Emails, IRC, and Twitter are very useful, but also create interruptions. For some types of task this is not a problem, but for others the time it takes to get back to being productive after an interruption is substantial. Or sometimes there is an imminent and important deadline and it is desirable to avoid being sidetracked.

Self-discipline is one approach for these situations, but sometimes it’s not enough. So I wrote a simple Python script for screen — screentimelock — which locks the terminal for a period of time. I don’t need to use this often, but since my email, IRC, and Twitter clients all reside in a screen session, I find it works well for me.

The script is started by screen’s lockscreen command, which is by default invoked by Ctrl-A X. The screen is then cleared, which is helpful, as often just seeing the email subject lines is enough to act as a distraction. The screen will remain cleared and the terminal locked until the next hour (e.g. if the script is activated at 7:15, it will unlock at 8:00).

It is of course possible to bypass the lock. Ctrl-C is ignored, but logging in from a different location and either killing the script or re-attaching the screen will work. Still, this is far more effort than glancing at the terminal, so I find the speed-bump screentimelock provides is enough to avoid temptation.
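For the curious, the behaviour described above boils down to something like the following simplified sketch; the released script may differ in its details:

    #!/usr/bin/env python
    # Simplified sketch of the behaviour described above (the released
    # screentimelock script may differ): ignore Ctrl-C, clear the terminal,
    # and sleep until the top of the next hour.
    import os
    import signal
    import time

    signal.signal(signal.SIGINT, signal.SIG_IGN)   # Ctrl-C is ignored
    os.system("clear")                             # hide distracting subject lines

    now = time.localtime()
    seconds_to_next_hour = 3600 - (now.tm_min * 60 + now.tm_sec)
    time.sleep(seconds_to_next_hour)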

I’m releasing this software under the BSD license in the hope that other people find it useful. The download link, installation instructions and configuration parameters can be found on the screentimelock homepage. Any comments would be appreciated, but despite Zawinski’s Law, this program will not be extended to support reading mail 🙂

Temporal Correlations between Spam and Phishing Websites

Richard Clayton and I have been studying phishing website take-down for some time. We monitored the availability of phishing websites, finding that while most phishing websites are removed within a day or two, a substantial minority remain for much longer. We later found that one of the main reasons why so many websites slip through the cracks is that the take-down companies responsible for removal refuse to share their URL lists with each other.

One nagging question remained, however. Do long-lived phishing websites cause any harm? Would removing them actually help? To get that answer, we had to bring together data on the timing of phishing spam transmission (generously shared by Cisco IronPort) with our existing data on phishing website lifetimes. In our paper co-authored with Henry Stern and presented this week at the USENIX LEET Workshop in Boston, we describe how a substantial portion of long-lived phishing websites continue to receive new spam until the website is removed. For instance, fresh spam continues to be sent out for 75% of phishing websites alive after one week, attracting new victims. Furthermore, around 60% of phishing websites still alive after a month keep receiving spam advertisements.

Consequently, removal of websites by the banks (and the specialist take-down companies they hire) is important. Even when the sites stay up for some time, there is value in continued efforts to get them removed, because this will limit the damage.

However, as we have pointed out before, the take-down companies cause considerable damage by their continuing refusal to share data on phishing attacks with each other, despite our proposals addressing their competitive concerns. Our (rough) estimate of the financial harm due to longer-lived phishing websites was $330 million per year. Given this new evidence of persistent spam campaigns, we are now more confident of this measure of harm.

There are other interesting insights discussed in our new paper. For instance, phishing attacks can be broken down into two main categories: ordinary phishing hosted on compromised web servers and fast-flux phishing hosted on a botnet infrastructure. It turns out that fast-flux phishing spam is more tightly correlated with the uptime of the associated phishing host. Most spam is sent out around the time the fast-flux website first appears and stops once the website is removed. For phishing websites hosted on compromised web servers, there is much greater variation between the time a website appears and when the spam is sent. Furthermore, fast-flux phishing accounted for 68% of the total email spam detected by IronPort, despite fast-flux sites making up only 3% of all the websites.

So there seems to be a cottage industry of fairly disorganised phishing attacks, with perhaps a few hundred people involved. Each compromises a small number of websites, while sending a small amount of spam. Conversely, there are a small number of organised gangs who use botnets for hosting, send most of the spam, and are extremely efficient on every measure we consider. We understand that the police are concentrating their efforts on the second set of criminals. This appears to be a sound decision.

The Curtain Opens on Facebook's Democracy Theatre

Last month we penned a highly critical report on Facebook’s proposed terms of service and much-hyped “public review” process. We categorised them as “democracy theatre”, a publicity stunt intended to provide the appearance of community input without committing to real change. We submitted our report to Facebook’s official forum, and it was backed by the Open Rights Group as their official expert response, as requested by Facebook. Last night, Facebook published their revised terms of service and unveiled their voting process, and our scepticism about the process has been confirmed. We’ve issued a press release summarising our opposition to the new terms.

Taking a look at the diff output from the revised terms, it’s clear that, as we anticipated, no meaningful changes were made. All of the changes are superficial; in fact, Section 2 is now slightly less clear, and a few more shady definitions have been pushed to the back of the document. Facebook received hundreds of comments in addition to our report during the public review process, but their main response was a patronising FAQ document which dismissed users’ concerns as mere misunderstandings of Facebook’s goodwill. Yet Facebook still described their new terms as “reflecting comments from users and experts received during the 30-day comment period.” We would challenge Facebook to point to a single revision which reflected a specific comment received.

The voting process is also problematic, as we predicted it would be. The new terms were announced and instantly put to a 7-day vote, hardly enough time for a serious debate on the revised terms. Depending on your profile settings, it can be quite hard even to find the voting interface: for some profiles it is prominently shown on the home page, for others it is hidden and can’t even be found through search. The voting interface was outsourced to a third-party developer called Wildfire Promotion Builder and has crashed frequently in the first 12 hours of voting, despite a relatively low turnout (50,000 votes so far). This is particularly damning since the required quorum is 60 million votes over 7 days, meaning Facebook was technically unprepared to handle even 1% of the required voting traffic.
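For what it’s worth, the back-of-the-envelope arithmetic behind that figure, using only the numbers quoted above:

    # Rough arithmetic behind the "1%" claim, using the figures quoted above.
    quorum = 60_000_000                           # votes required over the 7-day window
    needed_in_12_hours = quorum * 12 / (7 * 24)   # roughly 4.3 million votes
    observed = 50_000                             # votes cast in the first 12 hours
    print(observed / needed_in_12_hours)          # about 0.012, i.e. roughly 1%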

The poorly executed voting interface summarises the situation well. This process was never about democracy or openness, but about damage control after a major PR disaster. Truly opening the site up to user control is an interesting option and might be in Facebook’s long-term interest. They are also certainly within their rights to run their site as a dictatorship using the older, corporate-drafted terms of service. But it’s tough to swallow Facebook’s arrogant insistence that it’s engaging users when it’s really doing no such thing.

Update, 24/04/2009: The vote ended yesterday. About 600,000 users voted, 0.3% of all users on the site and less than 1% of the required 30%. Over 25% of voters opposed the new terms of service, though many of these votes can be read as protest votes. For Facebook, it was still a win, as they received mostly good press and have now had their new terms ratified.

Chip and PIN on Trial

The trial of Job v Halifax plc has been set down for April 30th at 1030 in the Nottingham County Court, 60 Canal Street, Nottingham NG1 7EJ. Alain Job is an immigrant from Cameroon who has had the courage to sue his bank over phantom withdrawals from his account. The bank refused to refund the money, making the usual claim that its systems were secure. There’s a blog post on the cavalier way in which the Ombudsman dealt with his case. Alain’s case was covered briefly in the Guardian in the run-up to a previous hearing; see also reports in Finextra here, here and (especially) here.

The trial should be interesting and I hope it’s widely reported. Whatever the outcome, it may have a significant effect on consumer protection in the UK. For years, financial regulators have been just as credulous about the banks’ claims to be in control of their information-security risk management as they were about the similar claims made in respect of their credit risk management (see our blog post on the ombudsman for more). It’s not clear how regulatory capture will (or can) be fixed in respect of credit risk, but it is just possible that a court could fix the consumer side of things. (This happened in the USA with the Judd case, as described in our submission to the review of the ombudsman service — see p 13.)

For further background reading, see blog posts on the technical failures of chip and PIN, the Jane Badger case, the McGaughey case and the failures of fraud reporting. Go back into the 1990s and we find the Halifax again as the complainant in R v Munden; John Munden was prosecuted for attempted fraud after complaining about phantom withdrawals. The Halifax couldn’t produce any evidence and he was acquitted.