Posts filed under 'Social networks'

Sep 23, '10

The New York Times has followed up the recent Twitter hack with an online debate on social network security for which I wrote a short piece.

Jun 1, '10

The book “Digital Activism Decoded: The New Mechanics of Change” is one of the first on the topic of digital activism. It discusses how digital technologies as diverse as the Internet, USB thumb drives, and mobile phones are changing the nature of contemporary activism.

Each of the chapters offers a different perspective on the field. For example, Brannon Cullum investigates the use of mobile phones (e.g. SMS, voice and photo messaging) in activism, a technology often overlooked but increasingly important in countries with low ratios of personal computer ownership and poor Internet connectivity. Dave Karpf considers how to measure the success of digital activism campaigns, given the huge variety of (potentially misleading) metrics available such as page impression and number of followers on Twitter. The editor, Mary Joyce, then ties each of these threads together, identifying the common factors between the disparate techniques for digital activism, and discussing future directions.

My chapter “Destructive Activism: The Double-Edged Sword of Digital Tactics” shows how the positive activism techniques promoted throughout the rest of the book can also be used for harm. Just as digital tools can facilitate communication and create information, they can also be used to block and destroy. I give some examples where these events have occurred, and how the technology to carry out these actions came to be created and deployed. Of course, activism is by its very nature controversial, and so is where to draw the line between positive and negative actions. So my chapter concludes with a discussion of the ethical frameworks used when considering the merits of activism tactics.

Digital Activism Decoded, published by iDebate Press, is now available for download, and can be pre-ordered from Amazon UK or Amazon US ahead of its June 30th release.

Update (2010-06-17): Amazon now have the book in stock at both their UK and US stores.

Digital Activism Decoded

Feb 12, '10

Google Buzz has been rolled out to 150M Gmail users around the world. In their own words, it’s a service to start conversations and share things with friends. Cynics have said it’s a megalomaniacal attempt to leverage the existing user base to compete with Facebook and Twitter as a social hub. Privacy advocates have rallied sharply around a particular flaw: the path of least resistance to signing up for Buzz includes automatically following people based on Buzz’s recommendations from email and chat frequency, and this “follower” list is completely public unless you find the well-hidden privacy setting. As a business decision this makes sense: Buzz’s only chance of taking off is if users can get started very quickly. But it is a privacy misstep that a mandatory internal review would certainly have objected to. Email is still a private, personal medium. People email their mistresses, workers email about job opportunities, and reporters email anonymous sources, all with the same accounts they use for everything else. Beyond the few embarrassing incidents this will surely cause, it fundamentally plays with people’s perceptions of public and private online spaces and actively changes social norms, as my colleague Arvind Narayanan spelled out nicely.

Perhaps more interesting than the pundits’ responses, though, is the ability to view thousands of users’ reactions to Buzz as they happen. Google’s design philosophy of “give minimal instructions and just let users type things into text boxes and see what happens” has preserved a virtual Pompeii of confused users trying to figure out what the new thing was and accidentally broadcasting their thoughts to the entire Internet. If you search Buzz for words like “stupid,” “sucks,” and “hate,” the majority of the conversation so far is about Buzz itself. Reactions are all over the board: confusion, stress, excitement, malaise, anger, pleading. Thousands of users are badly confused by Google’s “follow” and “profile” metaphors. Others are wondering how the service compares to the competition. Many just want the whole thing to go away (leading to a few how-to guides), or are blasting Google, or blasting others for complaining.

It’s a major data mining and natural language processing challenge to analyze the entire body of reactions to the new service, but the general reaction is widespread disorientation and confusion. In the emerging field of security psychology, the first 48 hours of Buzz posts could provide a wealth of data about how people react when their privacy expectations are suddenly shifted by the machinations of Silicon Valley.
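As a toy illustration of that analysis, a crude first pass could simply tally negative-sentiment keywords across a sample of posts. The posts and keyword list below are made up for illustration, not real Buzz data, and a serious study would need proper NLP rather than keyword matching:

```python
from collections import Counter

# Hand-picked keyword list; a real study would use a sentiment lexicon.
KEYWORDS = {"stupid", "sucks", "hate", "confused", "angry"}

def keyword_counts(posts):
    """Count how many posts mention each negative-sentiment keyword."""
    counts = Counter()
    for post in posts:
        # Normalise each post to a set of lowercased, punctuation-stripped words.
        words = {w.strip(".,!?\"'").lower() for w in post.split()}
        counts.update(words & KEYWORDS)
    return counts

sample = [
    "This Buzz thing is stupid, how do I turn it off?",
    "I hate that everyone can see who I follow.",
    "Buzz sucks. So confused by the follow model.",
]
print(keyword_counts(sample))
```

Even something this crude makes the headline observation easy to reproduce: run it over a search-result sample and see whether the complaints are about Buzz itself.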

Feb 4, '10

Facebook is rolling out two new features with privacy implications: an app dashboard and a gaming dashboard. Take a 30-second look at the beta versions, which are already live (with real user data), and see if you can spot any likely problems. For non-Facebook users: the new interfaces essentially provide a list of applications that your friends are using, including a “Recent Activity” section which lists when applications were used. What could possibly go wrong?

Well, some users may use applications they don’t want their friends to know about, like dating or job-search apps. And they certainly may not want others to know when they used an application, if this makes it clear that they were playing a game on company time. This isn’t a catastrophic privacy breach, but it will definitely lead to a few embarrassing situations. As I’ve argued before, users should have a basic privacy expectation that if they continue to use a service in a consistent way, data won’t be shared in a new, unexpected manner of which they have no warning and over which they have no control; this new feature violates that expectation. The interesting thing is how Facebook is continually caught by surprise when their spiffy new features upset users. They seem equally clueless with their response: allowing developers to opt an application out of appearing on the dashboard. Developers have no incentive to do this, as they want maximum exposure for their apps. A minimally acceptable solution must allow users to opt themselves out.

It’s inexcusable that Facebook doesn’t appear to have a formal privacy testing process to review new features and recommend fixes before they go live. The site is quite complicated, but a small team should be able to identify the issues with something like the new dashboard in a day’s work. It could be effective with 1% of the manpower of the company’s nudity cops. Notably, Facebook is trying to resolve a class-action lawsuit over their Beacon fiasco by creating an independent privacy foundation, which privacy advocates and users have both objected to. As a better way forward, I’d call for creating an in-house “privacy ombudsman” team, with the authority to review new features and publish analysis of them, as a much more direct step towards preventing future privacy failures.

Dec 11, '09

Facebook has been rolling out new privacy settings in the past 24 hours along with a “privacy transition” tool that is supposed to help users update their settings. Ostensibly, Facebook’s changes are the result of pressure from the Canadian privacy commissioner, and in Facebook’s own words the changes are meant to be “new tools to control your experience.” The changes have been harshly criticized in a number of high-profile places: the New York Times, Wired, Cnet, TechCrunch, Valleywag, ReadWriteWeb, and by the EFF and the ACLU. The ACLU has the most detailed technical summary of the changes: essentially, there are more granular controls, but many more things will default to “open to everyone.” It’s most telling to check the blogs used by Facebook developers and marketers with a business interest in the matter. Their take is simple: a lot more information is about to be shared, and developers need to find out how to use it.

The most discussed issue is the automatic change to more open settings, which will lead to privacy breaches of the socially awkward variety, as users accidentally post something that the wrong person can read. This will assuredly happen more frequently as a direct result of these changes: even though Facebook is trying to force users to read about the new settings, it’s a safe bet that most won’t read any of it. Many people learn how Facebook works by experience; they expect it to keep working that way, and it’s a bad precedent to change that when it’s not necessary. The fact that Facebook’s “transition wizard” includes one column of radio buttons for “keep my old settings” and a pre-selected column for “switch to the new settings Facebook wants me to have” shows that either they don’t get it or they really don’t respect their users. Most of this isn’t surprising though: I wrote in June that Facebook would be automatically changing user settings to be more open, and TechCrunch also saw this coming in July.

There’s a much more surprising bit which has been mostly overlooked: it’s now impossible for any user to hide their friend list from the Internet at large. Facebook has a few shameful cop-out statements about this, stating that you can remove it from your default profile view if you wish, but since (in their opinion) it’s “publicly available information” you can’t hide it from people who really want to see it. It has never worked this way previously, as hiding one’s friend list was always an option, and there have been many research papers, including a few by me and colleagues in Cambridge, concluding that the social graph is actually the most important information to keep private. The threats here are more fundamental and dangerous: unexpected inference of sensitive information, cross-network de-anonymisation, and socially targeted phishing and scams.

It’s incredibly disappointing to see Facebook ignoring a growing body of scientific evidence and putting its social graph up for grabs. It will likely be completely crawled fairly soon by professional data aggregators, and probably by enterprising researchers soon after. The social graph is a powerful view into who we are (Mark Zuckerberg said so himself), and it’s a sad day to see Facebook cynically telling us we can’t decide for ourselves whether or not to share it.
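To make the inference threat concrete, here is a minimal sketch of how a public friend list lets anyone guess an attribute a user never disclosed, simply by majority vote among friends who did disclose it. The graph, names, and attributes below are entirely made up; real inference attacks in the literature are far more sophisticated:

```python
from collections import Counter

# A tiny made-up friend graph (the part Facebook made public)...
friends = {
    "alice": ["bob", "carol", "dan"],
    "bob": ["alice", "carol"],
    "carol": ["alice", "bob", "dan"],
    "dan": ["alice", "carol"],
}
# ...and attributes a few users chose to share on their profiles.
disclosed = {"bob": "party_A", "carol": "party_A", "dan": "party_B"}

def infer(user):
    """Guess a user's undisclosed attribute by majority vote among friends."""
    votes = Counter(disclosed[f] for f in friends[user] if f in disclosed)
    return votes.most_common(1)[0][0] if votes else None

# alice disclosed nothing, yet her public friend list gives a strong guess.
print(infer("alice"))
```

The point is that the friend list alone, with no other data, is enough to run this attack at scale, which is precisely why the research consensus treats the social graph as sensitive.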

UPDATE 2009-12-11: Less than 12 hours after publishing this post, Facebook backed down in the face of criticism and made it possible to hide one’s friend list. They’ve done this in a laughably ham-handed way: friend-list visibility is now all-or-nothing, while you can set complex ACLs on most other profile items. It’s still bizarre that they’ve messed with this at all; for years the default was in fact to show your friend list only to other friends. One can only conclude that they really want all users sharing their friend list while trying to appear privacy-conscious: this is precisely the “privacy communication game” which Sören Preibusch and I wrote of in June. This remains an ignoble moment for Facebook: the social graph will still become mostly public, as they’ll be changing overnight the visibility of friend lists for the hundreds of millions of users who don’t find this well-hidden opt-out.

Aug 5, '09

I wrote about the mess caused by Facebook’s insecure application platform nearly 2 months ago. I also wrote about the long-term problems with “informed consent” for data use in social networks. In the past week, both problems came to a head as users began complaining about multiple third-party ad networks using their photos in banner ads. When I mentioned this problem in June, Facebook had just shut down the biggest ad networks for “deceptive practices,” specifically by duping users into a US$20 per month ringtone subscription. The void created by banning SocialReach and SocialHour apparently led to many new advertisers popping up in their place, with most carrying on the practice of using user photos to hawk quizzes, dating services, and the like. The ubiquitous ads annoyed enough users that Facebook was convinced to ban advertisers from using personal data. This is a welcome move, but Facebook underhandedly inserted a curious new privacy setting at “Privacy Settings->News Feed and Wall->Facebook Ads”:

Facebook does not give third party applications or ad networks the right to use your name or picture in ads. If this is allowed in the future, this setting will govern the usage of your information.

With this change, Facebook has quietly reserved the right to re-allow applications to utilise user data in ads in the future, and has opted everybody in to the feature. We’ve written about social networks minimising privacy salience, but this is plainly deceptive. It’s hard not to conclude this setting was purposefully hidden from sight, as ads shown by third-party applications have nothing to do with the News Feed or Wall. The choices of “No One” and “Only Friends” are also obfuscating, as only friends’ applications can access data from Facebook’s API to begin with; this is a simple opt-out checkbox dressed up to make being opted in seem more private. Meanwhile, Facebook has been showing users a patronising popup message on log-in:

Worried about privacy? Your photos are safe. There have been misleading rumors recently about the use of your photos in ads. Don’t believe them. These rumors were related to third-party applications, and not ads shown by Facebook. Get the whole story at the Facebook Blog, or check out the Help Center.

This message is misleading, if not outright dishonest, and shows an alarming dismissal of what was a widespread practice that offended many users. People weren’t concerned with whether their photos were sent to advertisers by Facebook itself or third-parties. They don’t want their photos or names used or stored by advertisers regardless of the technical details. The platform API remains fundamentally broken and gives users no way to prevent applications from accessing their photos. Facebook would be best served by fixing this instead of dismissing users’ concern for privacy as “misleading rumors.”

Jun 26, '09

We often equate social networking with Facebook, MySpace, and the also-rans, but in reality there are tons of social networks out there, dozens of which have membership in the millions. Around the world it’s quite a competitive market. Sören Preibusch and I decided to study the whole ecosystem to analyse how free-market competition has shaped the privacy practices which I’ve been complaining about. We carefully examined 45 sites, collecting over 250 data points about each site’s privacy policies, privacy controls, data collection practices, and more. The results were fascinating, as we presented this week at the WEIS conference in London. Our full paper and complete dataset are now available online as well.

We collected a lot of data, and there was a little bit of something for everybody. There was encouraging news for fans of globalisation, as we found the social networking concept popular across many cultures and languages, with the most popular sites available in over 40 languages. From a business perspective, there was an interesting finding that photo-sharing may be the killer application for social networks, as this feature was promoted far more often than sharing videos, blogging, or playing games. Unfortunately the news was mostly negative from a privacy standpoint. We found some predictable but still surprising problems. Most sites collect too much unnecessary data, with 90% requiring a full name and date of birth. Security practices are dreadful: no sites employed phishing countermeasures, and 80% of sites failed to protect password entry using TLS. Privacy policies were obfuscated and confusing, and almost half failed basic accessibility tests. Privacy controls were confusing and overwhelming, and profiles were almost universally left open by default.
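As a rough sketch of how one of those checks can be automated, consider the password-over-TLS test: parse a login page for a form containing a password field and flag it if the form submits over plain HTTP. This is a deliberately simplified illustration (a real audit must also resolve relative action URLs against the page’s own scheme, handle nested markup, and so on); the example page is made up:

```python
from html.parser import HTMLParser

class LoginFormChecker(HTMLParser):
    """Flag forms that submit a password field to a non-HTTPS action URL."""

    def __init__(self):
        super().__init__()
        self._action = None      # action of the form we're currently inside
        self.insecure_forms = [] # action URLs that take a password in the clear

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._action = attrs.get("action", "")
        elif tag == "input" and attrs.get("type") == "password":
            if self._action is not None and not self._action.startswith("https://"):
                self.insecure_forms.append(self._action)

page = '<form action="http://example.net/login"><input type="password" name="pw"></form>'
checker = LoginFormChecker()
checker.feed(page)
print(checker.insecure_forms)
```

Running one small scripted check like this per data point is what makes it feasible to collect 250 data points across 45 sites consistently.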

The most interesting story we found, though, was how sites consistently hid any mention of privacy until we visited their privacy policies, where they provided paid privacy seals and strong reassurances about how important privacy is. We developed a novel economic explanation for this: sites appear to craft two different messages for two different populations. Most users care about privacy but don’t think about it in day-to-day life. Sites take care to avoid mentioning privacy to them, because even mentioning privacy positively will cause them to be more cautious about sharing data. This phenomenon is known as “privacy salience,” and it makes sites tread very carefully around privacy, because users must be comfortable sharing data for the site to be fun. Instead of mentioning privacy, new users are shown a huge sample of other users posting fun pictures, which encourages them to share as well. For the privacy fundamentalists who go looking for reassurance by reading the privacy policy, though, sites drum it up in abundance.

The privacy fundamentalists of the world may be positively influencing privacy on major sites through their pressure. Indeed, the bigger, older, and more popular sites we studied had better privacy practices overall. But the desire to limit privacy salience is also a major problem, because it prevents sites from providing clear information about their privacy practices. Most users therefore can’t tell what they’re getting into, resulting in the predominance of poor practices in this “privacy jungle.”

Jun 18, '09

Last week Facebook announced the end of regional networks for access control. The move makes sense: regional networks had no authentication so information available to them was easy to get with a fake account. Still, silently making millions of weakly-restricted profiles globally viewable raises some disturbing questions. If Terms of Service promise to only share data consistent with users’ privacy settings, but the available privacy settings change as features are added, what use are the terms as a legal contract? This is just one instance of a major problem for rapidly evolving web pages which rely on a static model of informed consent for data collection. Even “privacy fundamentalists” who are careful to read privacy policies and configure their privacy settings can’t be confident of their data’s future for three main reasons:

  • Functionality Changes: Web 2.0 sites add features constantly, usually with little warning or announcement. Users are almost always opted-in for fear that features won’t get noticed otherwise. Personal data is shared before users have any chance to opt out. Facebook has done this repeatedly, opting users in to NewsFeed, Beacon, Social Ads, and Public Search Listings. This has generated a few sizeable backlashes, but Facebook maintains that users must try new features in action before they can reasonably opt out.
  • Contractual Changes: Terms of Service documents can often be changed without notice, and users automatically agree to the new terms by continuing to use the service. In a study we’ll be publishing at WEIS next month evaluating 45 social networking sites, almost half don’t guarantee to announce changes to their privacy policies. Less than 10% of the sites commit to a mandatory notice period before implementing changes (typically a week or less). Realistically, at least 30 days are needed for fundamentalists to read the changes and cancel their accounts if they wish.
  • Ownership Changes: As reported in the excellent survey of web privacy practices by the KnowPrivacy project at UC Berkeley, the vast majority (over 90%) of sites explicitly reserve the right to share data with ‘affiliates’ subject only to the affiliate’s privacy policy. Affiliate is an ambiguous term, but it includes at least parent companies and their subsidiaries. If your favourite web site gets bought out by an international conglomerate, your data is transferred to the new owners, who can instantly start using it under their own privacy policy. This isn’t an edge case; it’s a major loophole: websites are bought and sold all the time, and for many startups acquisition is the business model.

For any of these reasons, the terms under which consent was given can be changed without warning. Safely disclosing personal data on the web thus requires continuously monitoring sites for new functionality, updated terms of service, or mergers, and instantaneously opting out if you are no longer comfortable. This is impossible even for privacy fundamentalists with an infinite amount of patience and legal knowledge, rendering the old paradigm of informed consent for data collection unworkable for Web 2.0.
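The monitoring treadmill described above can be sketched mechanically: fetch each site’s privacy policy, hash it, and compare against the previous run. The URL and state file below are illustrative, and even this only covers contractual changes; a real monitor would also have to track feature launches and acquisitions, which is exactly why the burden is unworkable:

```python
import hashlib
import json
import urllib.request

# Illustrative watchlist; a privacy fundamentalist would list every site they use.
POLICIES = {"examplebook": "https://example.net/privacy"}
STATE_FILE = "policy_hashes.json"

def fetch_hash(url):
    # Hash the raw policy page; any edit, however small, changes the digest.
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def diff_hashes(old, new):
    # Sites whose policy changed since the last run (newly added sites don't count).
    return [name for name, digest in new.items() if old.get(name) not in (None, digest)]

def check_for_changes():
    try:
        with open(STATE_FILE) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}
    new = {name: fetch_hash(url) for name, url in POLICIES.items()}
    with open(STATE_FILE, "w") as f:
        json.dump(new, f)
    return diff_hashes(old, new)
```

Note what the script cannot do: it can tell you *that* the policy changed, but reading the diff, judging it, and cancelling the account within the notice period is still left to the user.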

Jun 9, '09

I’ve been writing a lot about privacy in social networks, and sometimes the immediacy gets lost during the more theoretical debates. Recently, though, I’ve been investigating a massive privacy breach on Facebook’s application platform which serves as a sobering case study. Even to me, the extent of unauthorised data flow I found, and the cold economic motivations keeping it going, were surprising. Facebook’s application platform remains a disaster from a privacy standpoint, undermining one of the more compelling features of the network.


May 20, '09

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions: access control lists and capabilities. The main website checks each request for a photo against an ACL; it then grants a capability to view the photo in the form of an obfuscated URL which can be sent to the photo server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed, and it appears that most sites now use capability-URLs with enough randomness to be unforgeable. There’s another traditional problem with capability systems though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing web sites and found that most failed to remove image files from their photo servers after they were deleted from the main web site. It’s often feared that once data is uploaded into “the cloud,” it’s impossible to tell how many backup copies may exist and where; our results provide clear proof that content delivery networks are a major problem for data remanence.
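The capability-URL pattern, and the revocation step that sites get wrong, can be sketched in a few lines. All names here are made up, and a dict stands in for the CDN’s file store; the essential point is that the photo server does no permission check, so deleting a photo from the main site must also remove the file behind the capability URL:

```python
import secrets

PHOTO_HOST = "https://photos.example-cdn.net"
_store = {}  # stands in for the CDN's filesystem: path -> bytes

def upload(data: bytes) -> str:
    # 128 random bits makes the URL infeasible to guess or forge.
    path = secrets.token_urlsafe(16) + ".jpg"
    _store[path] = data
    return f"{PHOTO_HOST}/{path}"

def serve(path: str):
    # No ACL check here: knowing the path IS the permission.
    return _store.get(path)

def delete(url: str):
    # The step our experiment found most sites skipping: the main site must
    # tell the CDN to remove the file, or the capability lives on forever.
    _store.pop(url.rsplit("/", 1)[1], None)
```

If `delete` is never called on the CDN side, anyone who saved the URL (a browser cache, a chat log, a crawler) can keep fetching the “deleted” photo indefinitely, which is exactly the data remanence failure we observed.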

