Privacy with technology: where do we go from here?

As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.

The audio recording is available and some notes from the session are below.

The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.

Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.

I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.

The panel discussion then moved to the “Right to be Forgotten” ruling, its impact and how US and EU laws and practice are diverging. In particular, how should the ideas behind the Rehabilitation of Offenders Act influence the availability of information on past criminal acts on the Internet?

On privacy of health data, the risks of misuse include “disaster scenarios” such as the identity of everyone’s biological parents being disclosed (including for those who didn’t realise their paternity was misattributed). But that doesn’t mean that no sensitive information should ever be disclosed; instead, people should be allowed to make appropriate trade-offs, balancing the risks against the potential benefits. For example, teenagers have been shown to make careful privacy decisions online, despite some casual observers having previously made throw-away remarks to the effect that teenagers don’t care about privacy.

When the discussion was opened up to the floor, questions included the effectiveness of privacy-preserving search engines like DuckDuckGo, which promise not to track users (but give users no ability to verify that claim), as contrasted with Tor, which maintains its security even if partially compromised. The EU Data Retention Directive (which was declared “invalid” by the European Court of Justice in April 2014) was discussed, though subsequent to this panel the Data Retention and Investigatory Powers Act 2014 was passed, complicating the situation further.

One potential privacy-enhancing technique is obfuscation, whether through tradecraft to avoid determined surveillance or just white lies to smooth social situations. However, such obfuscation is becoming harder now that there are so many sources of information which can be tied together, such as CCTV, mobile phones and wearable computers. Such pervasive surveillance might help catch some criminals, but whose definition of criminal should be used – “one man’s terrorist is another man’s freedom fighter”?

One challenge arising from the large amounts of data now available is that extracting value from it is difficult, whether from the highly structured information produced by media outlets or the less structured information on the web and in email archives. As information has many uses, its value varies greatly depending on who is asking, and its cost can drop dramatically when more than one company can provide it.

A question was raised on whether social conventions will change in response to the reduction in privacy, though often social conventions are more complex than they might at first seem and they may be preserved even after the reason for their existence has passed.

Cloud providers offer both advantages and disadvantages when it comes to security. On one hand, they have the skills to defend against attacks far more effectively than the average business could by itself. On the other hand, they become a far more tempting target for intelligence agencies and hackers. A further reason for the lack of trust in technology is the suspicion (now confirmed by the Snowden revelations) that the NSA has been adding back-doors to cryptographic systems. Tor is developing ways to increase trust through deterministic builds, and researchers are also developing algorithms which are robust to flawed random number generators (one of the NSA’s favoured back-door techniques).
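As an aside, one common defence against a flawed (or back-doored) random number generator is to combine several independent entropy sources with a cryptographic hash, so the output stays unpredictable as long as at least one source is sound. The sketch below only illustrates that general principle; it is not the specific algorithms mentioned on the panel or Tor’s implementation, and the particular sources chosen here are assumptions made for the example.

```python
import hashlib
import os
import time


def gather_sources():
    """Collect raw bytes from several independent entropy sources (illustrative only)."""
    return [
        os.urandom(32),                              # operating system RNG
        time.perf_counter_ns().to_bytes(8, "big"),   # high-resolution timer reading
        os.getpid().to_bytes(4, "big"),              # weak, but independent, input
    ]


def robust_random_bytes(n=32):
    """Hash all sources together: the result is only predictable if
    every contributing source is predictable. n must be at most 32
    (the SHA-256 digest length)."""
    h = hashlib.sha256()
    for source in gather_sources():
        h.update(len(source).to_bytes(4, "big"))  # length-prefix each input
        h.update(source)
    return h.digest()[:n]


if __name__ == "__main__":
    print(robust_random_bytes().hex())
```

Real designs (for example, the Fortuna construction) go further, re-seeding over time and protecting the generator’s internal state, but the combining step above captures the core idea.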

Overall the panel proved to be a popular and interesting session, with a diverse range of questions from the audience.
