Monthly Archives: May 2009

Attack of the Zombie Photos

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions: access control lists and capabilities. The main website checks each request for a photo against an ACL; it then grants a capability to view the photo in the form of an obfuscated URL which can be sent to the photo server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed and it appears that most sites now use capability-URLs with enough randomness to be unforgeable. There’s another traditional problem with capability systems though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing websites and found that most failed to remove image files from their photo servers after the photos were deleted from the main website. It’s often feared that once data is uploaded into “the cloud” it’s impossible to tell how many backup copies may exist or where; our results provide clear evidence that content-delivery networks are a major problem for data remanence.
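
To make the capability idea concrete, here is a minimal sketch, not taken from any real site and certainly simpler than what Facebook or its CDN actually do, of how a main website might mint unguessable photo URLs that a stateless photo server can check. The key, host name and helper functions are all hypothetical:

import hashlib
import hmac
import secrets

# Hypothetical names throughout; in a real deployment the key would be
# shared between the main website and the photo servers out of band.
SECRET_KEY = secrets.token_bytes(32)
PHOTO_HOST = "https://photos.example-cdn.com"

def capability_url(photo_id: str) -> str:
    # The MAC makes the path infeasible to forge or enumerate,
    # so possession of the URL acts as the capability.
    tag = hmac.new(SECRET_KEY, photo_id.encode(), hashlib.sha256).hexdigest()
    return f"{PHOTO_HOST}/{photo_id}_{tag}.jpg"

def photo_server_check(photo_id: str, tag: str) -> bool:
    # Stateless check on the photo server: recompute and compare the MAC.
    expected = hmac.new(SECRET_KEY, photo_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

Note that nothing in such a URL ever expires: once it has been handed out, the only way to revoke access is for the photo server to stop serving the file, which is exactly the step our experiment found most sites failing to take.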

Location privacy

I was recently asked for a brief (4-page) invited paper for a forthcoming special issue of the ACM SIGSPATIAL on privacy and security of location-based systems, so I wrote Foot-driven computing: our first glimpse of location privacy issues.

In 1989 at ORL we developed the Active Badge, the first indoor location system: an infrared transmitter worn by personnel that allowed you to tell which room the wearer was in. Every press and TV reporter who visited our lab worried about the intrusiveness of this technology; yet, today, all those people happily carry mobile phones through which they can be tracked anywhere they go. The significance of the Active Badge project was to give us a head start of a few years during which to think about location privacy before it affected hundreds of millions of people. (There is more on our early ubiquitous computing work at ORL in this free excerpt from my book.)
[Image: The ORL Active Badge]

Location privacy is a hard problem to solve, first because ordinary people don’t seem to actually care, and second because there is a misalignment of incentives: those who could do the most to address the problem are the least affected and the least concerned about it. But we have a responsibility to address it, in the same way that designers of new vehicles have a responsibility to address the pollution and energy consumption issue.

Security economics video

Here is a video of a talk I gave at DMU on security economics (and the slides). I’ve given variants of this survey talk at various conferences over the past two or three years; at last one of them recorded the talk and put the video online. There’s also a survey paper that covers much of the same material. If you find this interesting, you might enjoy coming along to WEIS (the Workshop on the Economics of Information Security) on June 24-25.

Reducing interruptions with screentimelock

Sometimes I find that I need to concentrate, but there are too many distractions. Emails, IRC, and Twitter are very useful, but also create interruptions. For some types of task this is not a problem, but for others the time it takes to get back to being productive after an interruption is substantial. Or sometimes there is an imminent and important deadline and it is desirable to avoid being sidetracked.

Self-discipline is one approach for these situations, but sometimes it’s not enough. So I wrote screentimelock, a simple Python script for screen that locks the terminal for a period of time. I don’t need to use this often, but since my email, IRC, and Twitter clients all reside in a screen session, I find it works well for me.

The script is started by screen’s lockscreen command, which by default is invoked with Ctrl-A X. When it runs, the screen is cleared, which is helpful because I often find that just seeing the email subject lines is enough of a distraction. The screen remains cleared and the terminal locked until the start of the next hour (e.g. if the script is activated at 7:15, it will unlock at 8:00).
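
For illustration, here is a rough sketch of that timing logic as a standalone Python script run as screen’s lock program. It is not the released screentimelock code, just the behaviour described above in miniature:

import datetime
import os
import signal
import time

def seconds_until_next_hour() -> float:
    # e.g. called at 7:15, this returns the time remaining until 8:00.
    now = datetime.datetime.now()
    next_hour = (now.replace(minute=0, second=0, microsecond=0)
                 + datetime.timedelta(hours=1))
    return (next_hour - now).total_seconds()

def main():
    signal.signal(signal.SIGINT, signal.SIG_IGN)  # swallow Ctrl-C
    os.system("clear")                            # hide the distracting subject lines
    time.sleep(seconds_until_next_hour())         # hold the lock until the top of the hour

if __name__ == "__main__":
    main()

When the sleep finishes and the process exits, screen takes the lock program’s termination as the cue to return control to the session.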

It is of course possible to bypass the lock. Ctrl-C is ignored, but logging in from a different location and either killing the script or re-attaching the screen will work. Still, this is far more effort than glancing at the terminal, so I find the speed-bump screentimelock provides is enough to avoid temptation.

I’m releasing this software, under the BSD license, in the hope that other people find it useful. The download link, installation instructions and configuration parameters can be found on the screentimelock homepage. Any comments would be appreciated, but despite Zawinski’s Law, this program will not be extended to support reading mail 🙂