Facebook tosses graph privacy into the bin

Facebook has been rolling out new privacy settings in the past 24 hours, along with a “privacy transition” tool that is supposed to help users update their settings. Ostensibly, Facebook’s changes are the result of pressure from the Canadian privacy commissioner, and in Facebook’s own words the changes are meant to be “new tools to control your experience.” The changes have been harshly criticized in a number of high-profile places: the New York Times, Wired, Cnet, TechCrunch, Valleywag, ReadWriteWeb, and by the EFF and the ACLU. The ACLU has the most detailed technical summary of the changes: essentially, there are more granular controls, but many more things will default to “open to everyone.” It’s most telling to check the blogs used by Facebook developers and marketers with a business interest in the matter. Their take is simple: a lot more information is about to be shared, and developers need to find out how to use it.

The most discussed issue is the automatic change to more open settings, which will lead to privacy breaches of the socially awkward variety, as users accidentally post something that the wrong person can read. This will assuredly happen more frequently as a direct result of these changes. Even though Facebook is trying to force users to read about the new settings, it’s a safe bet that most users won’t read any of it. Many people learn how Facebook works by experience; they expect it to keep working that way, and it’s a bad precedent to change that when it’s not necessary. The fact that Facebook’s “transition wizard” includes one column of radio buttons for “keep my old settings” and a pre-selected column for “switch to the new settings Facebook wants me to have” shows that either they don’t get it or they really don’t respect their users. Most of this isn’t surprising, though: I wrote in June that Facebook would be automatically changing user settings to be more open, and TechCrunch also saw this coming in July.

There’s a much more surprising change which has been mostly overlooked: it’s now impossible for any user to hide their friend list from being globally viewable to the Internet at large. Facebook has a few shameful cop-out statements about this, saying that you can remove it from your default profile view if you wish, but since (in their opinion) it’s “publicly available information” you can’t hide it from people who really want to see it. It has never worked this way previously, as hiding one’s friend list was always an option, and there have been many research papers, including a few by me and colleagues in Cambridge, concluding that the social graph is actually the most important information to keep private. The threats here are more fundamental and dangerous: unexpected inference of sensitive information, cross-network de-anonymisation, and socially targeted phishing and scams.

It’s incredibly disappointing to see Facebook ignoring a growing body of scientific evidence and putting its social graph up for grabs. It will likely be completely crawled fairly soon by professional data aggregators, and probably by enterprising researchers soon after. The social graph is a powerful view into who we are (Mark Zuckerberg has said so himself), and it’s a sad day to see Facebook cynically telling us we can’t decide for ourselves whether or not to share it.

UPDATE 2009-12-11: Less than 12 hours after publishing this post, Facebook backed down in the face of criticism and made it possible to hide one’s friend list. They’ve done this in a laughably ham-handed way, as friend-list visibility is now all-or-nothing, while you can set complex ACLs on most other profile items. It’s still bizarre that they’ve messed with this at all: for years the default was in fact to show your friend list only to other friends. One can only conclude that they really want all users sharing their friend list, while trying to appear privacy-conscious: this is precisely the “privacy communication game” which Sören Preibusch and I wrote of in June. This remains an ignoble moment for Facebook: the social graph will still become mostly public, as the friend lists of hundreds of millions of users who don’t find this well-hidden opt-out will have their visibility changed overnight.

What does Detica detect?

There has been considerable interest in a recent announcement by Detica of “CView” which their press release claims is “a powerful tool to measure copyright infringement on the internet”. The press release continues by saying that it will provide “a measure of the total volume of unauthorised file sharing”.

Commentators are divided as to whether these claims are nonsense, or whether the system must be deeply intrusive. The main reason for this is that when peer-to-peer file-sharing flows are encrypted, it is impossible for a passive observer to know what is being transferred.

I met with Detica last Friday, at their suggestion, to discuss what their system actually did (they’ve read some of my work on Phorm’s system, so meeting me was probably not entirely random). With their permission, I can now explain the basics of what they are actually doing. A more detailed account should appear at some later date.

RIP memes

There was a discussion a little while back on the UKCrypto mailing list about how the UK Regulation of Investigatory Powers Act came to be so specifically associated in the media with terrorism, when it is far more general than that (see, for example, “Anti-terrorism laws used to spy on noisy children”).

I suggested that this “meme” might well be traced back to the Home Office website’s quick overview text which used to say (presumably before they thought better of it):

The Regulation of Investigatory Powers Act (RIPA) legislates for using various methods of surveillance and information gathering for the prevention of crime including terrorism.

Well, I’ve just noticed another source of memes (which may be new, since Google are continually experimenting with their system, or which may have been there for simply ages, unnoticed by me at least).

How to vote anonymously under ubiquitous surveillance

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How can an election be arranged so that the voters’ privacy is best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. With cryptography, anything is possible. In a forthcoming paper to be published in IET Information Security (joint work with Peter Ryan and Piotr Zielinski), we describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of the voter’s privacy: only a full collusion of all the other voters can break it. In addition, the protocol is exceptionally efficient. It compares favorably with past solutions in terms of round efficiency, computation load and bandwidth usage, and is close to the best possible in each of these aspects.

Given the same security properties, it seems unlikely that a decentralized voting scheme significantly more efficient than ours exists. However, in cryptography nothing is ever certain to be optimal, so we leave this question open.
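The paper has the full details and security analysis; as a rough illustration only, here is a toy Python sketch of the protocol’s two-round structure. It uses insecurely small parameters and omits the zero-knowledge proofs (of knowledge of the secret exponent, and of vote well-formedness) that the real protocol requires, so it shows only why the tally comes out right while individual ballots look random:

```python
import random

# Toy parameters: a small safe prime p = 2q + 1. A real deployment would use
# a standard large group (e.g. a 2048-bit MODP group or an elliptic curve).
q = 1019
p = 2 * q + 1  # 2039, prime
g = 4          # generator of the order-q subgroup of quadratic residues mod p

def round1(n):
    """Round 1: each voter i picks a secret x_i and broadcasts g^x_i."""
    secrets = [random.randrange(1, q) for _ in range(n)]
    broadcasts = [pow(g, x, p) for x in secrets]
    return secrets, broadcasts

def reconstructed_base(i, broadcasts):
    """Y_i = (prod_{j<i} g^x_j) / (prod_{j>i} g^x_j) mod p.
    The exponents y_i are constructed so that sum_i x_i * y_i = 0."""
    num = 1
    for gx in broadcasts[:i]:
        num = num * gx % p
    den = 1
    for gx in broadcasts[i + 1:]:
        den = den * gx % p
    return num * pow(den, -1, p) % p  # modular inverse needs Python 3.8+

def round2(secrets, broadcasts, votes):
    """Round 2: voter i broadcasts Y_i^x_i * g^v_i with v_i in {0, 1}.
    (The real protocol adds a proof that v_i really is 0 or 1.)"""
    ballots = []
    for i, (x, v) in enumerate(zip(secrets, votes)):
        Y = reconstructed_base(i, broadcasts)
        ballots.append(pow(Y, x, p) * pow(g, v, p) % p)
    return ballots

def tally(ballots, n):
    """The product of all ballots is g^(sum of votes), because the masking
    terms Y_i^x_i cancel out; brute-force the small exponent to get the count."""
    prod = 1
    for b in ballots:
        prod = prod * b % p
    for total in range(n + 1):
        if pow(g, total, p) == prod:
            return total
    raise ValueError("tally failed")

secrets, bcast = round1(5)
ballots = round2(secrets, bcast, [1, 0, 1, 1, 0])
print(tally(ballots, 5))  # 3: the count is public, individual votes are not
```

Anyone can recompute the tally from the broadcast messages alone, which is the sense in which everything is publicly verifiable; recovering an individual vote from a single ballot would require all the other voters to pool their secrets.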

A preprint of the paper is available here, and the slides here.

The Real Hustle and the psychology of scam victims

This piece, which started as a contribution to Ross’s Security and Psychology initiative, is probably my most entertaining piece of research this year, and it’s certainly getting its share of attention.

I’ve been a great fan of The Real Hustle since 2006, and I recommend it to anyone with an interest in security; it has been good fun to work with the TV show’s co-creator Paul Wilson on this paper. We analyze the scams reproduced in the show, extract general principles from them that describe typical behavioural patterns exploited by hustlers, and then show how an awareness of these principles can also strengthen systems security.

In the space of a few months I have given versions of this talk around the world (Boston, London, Athens, London, Cambridge, Munich) to the security and psychology crowd, to computer researchers and to professional programmers, and it has never failed to attract interest. This is what Yahoo’s Chris Heilmann wrote on his blog when I gave the talk at StackOverflow to an audience of 250 programmers:

The other talk I was able to attend was Frank Stajano, a resident lecturer and security expert (and mighty sword-bearer). His talk revolved around application security but instead of doing the classic “prevent yourself from XSS/SQL injection/CSRF” spiel, Frank took a different route. BBC TV in the UK has a program called The Real Hustle which shows how people are scammed by tricksters and gamblers and the psychology behind these successful scams. Despite the abysmal Guy Ritchie style presentation of the show, it is full of great information: Frank and a colleague conducted a detailed research and analysis of all the attacks and the reasons why they work. The paper on the research is available: Seven principles for systems security (PDF). A thoroughly entertaining and fascinating presentation and a great example of how security can be explained without sounding condescending or drowning the audience in jargon. I really hope that there is a recording of the talk.

I’m giving the talk again at the Computer Laboratory on Tuesday 17 November in the Security Seminars series. The full write-up is available for download as a tech report.

Interview with Steven Murdoch on Finextra

Today Finextra, a financial technology news website, published a video interview with me discussing my research on banks using card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approach taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so these are subject to public scrutiny only late in deployment, if at all. This is in contrast with Internet protocols, which are commonly first discussed within industry and academia, then published as a specification, and only then implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

Also, I discuss some of the non-technical design decisions involved in the deployment of security technology. Specifically, the design needs to take into account risk analysis, psychology and usability, not just cryptography. Organizational structures also need to incentivize security: groups who design security mechanisms should be responsible for their failure, and structures should prevent knowledge of security failings from being hidden from management. If necessary, a separate penetration testing team should report directly to board level.

Finally I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the use of smart card readers, being issued in the UK to authenticate online banking transactions, can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.


Security psychology

I have put together a web page on psychology and security. There is a fascinating interplay between these two subjects, and their intersection is now emerging as a new research discipline, encompassing deception, risk perception, security usability and a number of other important topics. I hope that the new web page will be as useful in spreading the word as my security economics page has been in that field.

apComms backs ISP cleanup activity

The All Party Parliamentary Communications Group (apComms) recently published their report into an inquiry entitled “Can we keep our hands off the net?”

They looked at a number of issues, from “network neutrality” to how best to deal with child sexual abuse images. Read the report for all the details; in this post I’m just going to draw attention to one of the most interesting, and timely, recommendations:

51. We recommend that UK ISPs, through Ofcom, ISPA or another appropriate organisation, immediately start the process of agreeing a voluntary code for detection of, and effective dealing with, malware infected machines in the UK.

52. If this voluntary approach fails to yield results in a timely manner, then we further recommend that Ofcom unilaterally create such a code, and impose it upon the UK ISP industry on a statutory basis.

The problem is that although ISPs are pretty good these days at dealing with incoming badness (spam, DDoS attacks, etc.), they can be rather reluctant to deal with customers who are malware infected and sending spam, DDoS attacks, etc. to other parts of the world.

From a “security economics” point of view this isn’t too surprising (as I and colleagues pointed out in a report to ENISA). Customers demand effective anti-spam, or they leave for another ISP. But talking to customers and holding their hand through a malware infection is expensive for the ISP, and customers may just leave if hassled, so the ISPs have limited incentives to take any action.

When markets fail to solve problems, then you regulate… and what apComms is recommending is that a self-regulatory solution be given a chance to work. We shall have to see whether the ISPs seize this chance, or if compulsion will be required.

This UK-focussed recommendation is not taking place in isolation; there’s been activity all over the world in the past few weeks. In Australia the ISPs are consulting on a Voluntary Code of Practice for Industry Self-regulation in the Area of e-Security; in the Netherlands the main ISPs have signed an “Anti-Botnet Treaty”; and in the US the main cable provider, Comcast, has announced that its “Constant Guard” programme will in future detect whether its customers’ machines have become members of a botnet.

ObDeclaration: I assisted apComms as a specialist adviser, but the decision on what they wished to recommend was theirs alone.