What does Detica detect?

There has been considerable interest in a recent announcement by Detica of “CView” which their press release claims is “a powerful tool to measure copyright infringement on the internet”. The press release continues by saying that it will provide “a measure of the total volume of unauthorised file sharing”.

Commentators are divided as to whether these claims are nonsense, or whether the system must be deeply intrusive. The main reason for this is that when peer-to-peer file-sharing flows are encrypted, it is impossible for a passive observer to know what is being transferred.

I met with Detica last Friday, at their suggestion, to discuss what their system actually did (they’ve read some of my work on Phorm’s system, so meeting me was probably not entirely random). With their permission, I can now explain the basics of what they are actually doing. A more detailed account should appear at some later date.

RIP memes

There was a discussion a little while back on the UKCrypto mailing list about how the UK Regulation of Investigatory Powers Act came to be so specifically associated in the media with terrorism, when it is far more general than that (see, for example, “Anti-terrorism laws used to spy on noisy children”).

I suggested that this “meme” might well be traced back to the Home Office website’s quick overview text which used to say (presumably before they thought better of it):

The Regulation of Investigatory Powers Act (RIPA) legislates for using various methods of surveillance and information gathering for the prevention of crime including terrorism.

Well, I’ve just noticed another source of memes (which may be new, since Google are continually experimenting with their system, or which may have been there for simply ages, unnoticed by me at least).

How to vote anonymously under ubiquitous surveillance

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How to arrange an election such that the voter’s privacy will be best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. With cryptography, anything is possible. In a forthcoming paper to be published by IET Information Security, we (in joint work with Peter Ryan and Piotr Zielinski) describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of the voter’s privacy: only a full collusion of all the other voters can break it. In addition, the protocol is exceptionally efficient. It compares favorably to past solutions in terms of round efficiency, computation load and bandwidth usage, and is close to the best possible in each of these aspects.

It seems unlikely that a decentralized voting scheme with the same security properties could be significantly more efficient than ours. However, in cryptography nothing is ever optimal, so we leave this question open.
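To give a flavour of the idea, here is a minimal toy sketch in Python of the two-round, self-tallying construction on which the Open Vote Network is based: each voter publishes g^x_i, derives a “reconstructed key” from everyone else’s announcements, and casts a ballot whose blinding factors cancel when all ballots are multiplied together. The parameters, variable names and simplifications below are my own illustrative choices; the sketch omits the zero-knowledge proofs that make the real protocol publicly verifiable, uses tiny insecure numbers, and is not the implementation from the paper.

```python
import random

# Toy group parameters: a safe prime p = 2q + 1, with g generating the
# subgroup of prime order q. Illustration only -- no security whatsoever.
p, q, g = 1019, 509, 4

n = 5                          # number of voters
votes = [1, 0, 1, 1, 0]        # each voter's secret choice (1 = yes, 0 = no)

# Round 1: each voter i picks a secret x_i and announces g^x_i.
secrets = [random.randrange(1, q) for _ in range(n)]
announcements = [pow(g, x, p) for x in secrets]

def reconstructed_key(i):
    """g^{y_i} = (prod_{j<i} g^{x_j}) / (prod_{j>i} g^{x_j})  mod p."""
    num, den = 1, 1
    for j in range(i):
        num = num * announcements[j] % p
    for j in range(i + 1, n):
        den = den * announcements[j] % p
    return num * pow(den, -1, p) % p

# Round 2: voter i casts g^{x_i * y_i} * g^{v_i}. The exponents x_i * y_i
# sum to zero over all voters, so the blinding factors cancel in the tally.
ballots = [pow(reconstructed_key(i), secrets[i], p) * pow(g, v, p) % p
           for i, v in enumerate(votes)]

# Tally: multiply all ballots to obtain g^{sum of votes}, then recover the
# count by searching the small exponent space (at most n yes votes).
product = 1
for b in ballots:
    product = product * b % p
yes_votes = next(k for k in range(n + 1) if pow(g, k, p) == product)
print("yes votes:", yes_votes)   # prints 3 for the choices above
```

In the actual protocol, each announcement and each ballot is accompanied by a zero-knowledge proof (of knowledge of x_i, and that the vote is 0 or 1), which is what makes the entire public transcript verifiable by anyone.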

A preprint of the paper is available here, and the slides here.

The Real Hustle and the psychology of scam victims

This work, which started as a contribution to Ross’s Security and Psychology initiative, is probably my most entertaining piece of research this year, and it is certainly attracting its share of attention.

I’ve been a great fan of The Real Hustle since 2006 and recommend it to anyone with an interest in security, so it has been good fun to work on this paper with Paul Wilson, the show’s co-author. We analyze the scams reproduced in the show, extract from them general principles that describe the typical behavioural patterns exploited by hustlers, and then show how an awareness of these principles can also strengthen systems security.

In the space of a few months I have given versions of this talk around the world: Boston, London, Athens, London, Cambridge, Munich—to the security and psychology crowd, to computer researchers, to professional programmers—and it never failed to attract interest. This is what Yahoo’s Chris Heilmann wrote in his blog when I gave the talk at StackOverflow to an audience of 250 programmers:

The other talk I was able to attend was Frank Stajano, a resident lecturer and security expert (and mighty sword-bearer). His talk revolved around application security but instead of doing the classic “prevent yourself from XSS/SQL injection/CSRF” spiel, Frank took a different route. BBC TV in the UK has a program called The Real Hustle which shows how people are scammed by tricksters and gamblers and the psychology behind these successful scams. Despite the abysmal Guy Ritchie style presentation of the show, it is full of great information: Frank and a colleague conducted a detailed research and analysis of all the attacks and the reasons why they work. The paper on the research is available: Seven principles for systems security (PDF). A thoroughly entertaining and fascinating presentation and a great example of how security can be explained without sounding condescending or drowning the audience in jargon. I really hope that there is a recording of the talk.

I’m giving the talk again at the Computer Laboratory on Tuesday 17 November in the Security Seminars series. The full write-up is available for download as a tech report.

Interview with Steven Murdoch on Finextra

Today, Finextra (a financial technology news website) has published a video interview with me, discussing my research on banks using card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approach taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so these come under public scrutiny only late in deployment, if at all. This is in contrast with Internet protocols, which are commonly first discussed within industry and academia; then the specification is made public, and only then is it implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

I also discuss some of the non-technical design decisions involved in deploying security technology. Specifically, the design needs to take into account risk analysis, psychology and usability, not just cryptography. Organizational structures also need to incentivize security: the groups who design security mechanisms should be held responsible for their failures. Organizational structures should also ensure that knowledge of security failings is not hidden from management; if necessary, a separate penetration-testing team should report directly to board level.

Finally, I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the use of smart card readers, being issued in the UK to authenticate online banking transactions, can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.

BBC Inside Out, Monday 26th October 2009, 19:30, BBC One (East)

Security psychology

I have put together a web page on psychology and security. There is a fascinating interplay between these two subjects, and their intersection is now emerging as a new research discipline, encompassing deception, risk perception, security usability and a number of other important topics. I hope that the new web page will be as useful in spreading the word as my security economics page has been in that field.

apComms backs ISP cleanup activity

The All Party Parliamentary Communications Group (apComms) recently published their report into an inquiry entitled “Can we keep our hands off the net?”

They looked at a number of issues, from “network neutrality” to how best to deal with child sexual abuse images. Read the report for all the details; in this post I’m just going to draw attention to one of the most interesting, and timely, recommendations:

51. We recommend that UK ISPs, through Ofcom, ISPA or another appropriate organisation, immediately start the process of agreeing a voluntary code for detection of, and effective dealing with, malware infected machines in the UK.

52. If this voluntary approach fails to yield results in a timely manner, then we further recommend that Ofcom unilaterally create such a code, and impose it upon the UK ISP industry on a statutory basis.

The problem is that although ISPs are pretty good these days at dealing with incoming badness (spam, DDoS attacks etc), they can be rather reluctant to deal with customers who are malware-infected and are sending spam, DDoS attacks etc to other parts of the world.

From a “security economics” point of view this isn’t too surprising (as I and colleagues pointed out in a report to ENISA). Customers demand effective anti-spam, or they leave for another ISP. But talking to customers and holding their hand through a malware infection is expensive for the ISP, and customers may just leave if hassled, so the ISPs have limited incentives to take any action.

When markets fail to solve problems, then you regulate… and what apComms is recommending is that a self-regulatory solution be given a chance to work. We shall have to see whether the ISPs seize this chance, or if compulsion will be required.

This UK-focussed recommendation is not taking place in isolation; there has been activity all over the world in the past few weeks — in Australia the ISPs are consulting on a Voluntary Code of Practice for Industry Self-regulation in the Area of e-Security, in the Netherlands the main ISPs have signed an “Anti-Botnet Treaty”, and in the US the main cable provider, Comcast, has announced that its “Constant Guard” programme will in future detect when its customers’ machines become members of a botnet.

ObDeclaration: I assisted apComms as a specialist adviser, but the decision on what they wished to recommend was theirs alone.

Economics of peer-to-peer systems

A new paper, Olson’s Paradox Revisited: An Empirical Analysis of File-Sharing Behaviour in P2P Communities, finds a positive correlation between the size of a BitTorrent file-sharing community and the amount of content shared, despite a reduced individual propensity to share in larger groups, and deduces from this that file-sharing communities provide a pure (non-rival) public good. Forcing users to upload results in a smaller catalogue; but private networks provide both more and better content, as do networks aimed at specialised communities.

George Danezis and I produced a theoretical model of this five years ago in The Economics of Censorship Resistance. It’s nice to see that the data, now collected, bear us out.