PerSec 2006 and Naccache on tapping mobile phones

Over the past couple of months I attended about half a dozen events around the world (Brussels, Pisa (x3), Tokyo, Cambridge, York, Milan), often as invited speaker, but failed to mention them here. While I won’t promise that I will ever catch up with the reporting, let me at least start.

I was, with Ari Juels of RSA Labs, program chair of IEEE PerSec 2006, the security workshop of the larger PerCom conference, held in March 2006 in Pisa, Italy. I previously mentioned the RFID virus paper by Rieback et al. when it got the (second) best paper award: that was the paper I found most enjoyable of the ones in the main track.

Ari and I invited David Naccache as the keynote speaker of our workshop. This was, if I may say so myself, an excellent move: for me, his talk was by far the most interesting part of the whole workshop and conference. Now a professor at the École Normale Supérieure in Paris, David was until recently a security expert at leading smartcard manufacturer Gemplus. Among other things, his talents allow him to help law enforcement agencies tap the bad guys' cellphones, read the numbers in their phone books and find out where they have been.

His talk was very informative and entertaining, full of fascinating war stories, such as the tricks used to covertly steal an expired session key from the phone of a suspect in order to decrypt a recorded phone call that had been intercepted earlier as ciphertext. The target was asleep in a hotel room, with his phone charging on his bedside table, and David and his agents were in the next room, doing their electronic warfare from across the wall. What do you do in a case like this? Pretend to be the base station, reissue the old challenge so that the SIM generates the same session key, and then listen to the electromagnetic radiation from the pads of the SIM while the key is being transmitted to the handset via the SIM's electrical contacts. Brilliant. And that was just one in a rapid-fire sequence of equally interesting real-life stories.
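For readers unfamiliar with GSM authentication, here is a minimal sketch of why replaying an old challenge works. The real SIM runs operator-specific A3/A8 algorithms (often COMP128), which are not public; an HMAC stands in for them below purely to show the determinism the attack exploits.

```python
# Minimal sketch (not the real GSM algorithms): the SIM derives the session
# key Kc deterministically from its secret Ki and the network's challenge RAND,
# so replaying an old RAND makes it recompute the very same Kc.
import hmac, hashlib, os

def a8_session_key(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derive a 64-bit Kc from the SIM secret Ki and challenge RAND."""
    return hmac.new(ki, rand, hashlib.sha1).digest()[:8]

ki = os.urandom(16)      # secret stored in the SIM, which never leaves it
rand = os.urandom(16)    # challenge the real network sent during the recorded call

kc_then = a8_session_key(ki, rand)   # key that encrypted the intercepted call
kc_now = a8_session_key(ki, rand)    # fake base station replays the same RAND
assert kc_then == kc_now             # same key; the attacker only has to read it out
```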

David, like many of the other speakers at the workshop, has kindly allowed me to put up his paper and presentation slides on the workshop's web site. They won't be as good as his outstanding live talk, but you may still find them quite interesting.

On the same page you will also find two more papers by members of the Cambridge security group: one on multi-channel protocols by Ford-Long Wong and yours truly, and one attacking key distribution schemes in sensor networks by Tyler Moore.

Why so many CCTVs in the UK? (again)

I previously blogged about Prof. Martin Gill’s brilliant talk on CCTV at the Institute of Criminology.

I invited him to give it again as a Computer Laboratory seminar. He will do so on Wed 2006-05-17, 14:15. If you are around, do come along—highly recommended, and open to all. Title and abstract follow.

CCTV in the UK: A failure of theory or a failure of practice?

Although CCTV was heralded as something of a silver bullet in the fight against crime (not least by two Governments), scholarly research has questioned the extent to which it 'works'. Martin Gill led the Home Office national evaluation of CCTV and has subsequently conducted more research with CCTV schemes across the country. In this talk he will outline the findings from the national evaluation and assess the perspectives of the public, scheme workers and offenders (including showing film clips of offenders talking at crime scenes) to show just why CCTV has not worked out as many expected. Martin will relate these findings to the current development of a national strategy.

The Internet and Elections: the 2006 Presidential Election in Belarus

On Thursday, the OpenNet Initiative released its report, to which I contributed, studying Internet censorship in Belarus during the 2006 presidential election there. It even managed a brief mention in the New York Times.

In summary, we did find suspicious behaviour, particularly in the domain name system (DNS), the area I mainly explored, but no proof of outright filtering. It is rarely advisable to attribute to malice what can just as easily be explained by incompetence, so it is difficult to draw conclusions about what actually happened solely from the technical evidence. However, regardless of whether this was the first instance the ONI has seen of a concerted effort to hide state censorship, or simply an unfortunate coincidence of network problems, it is clear that existing tools for Internet monitoring are not adequate for distinguishing between these cases.

Simply observing that a site is inaccessible from within the country being studied is not enough evidence to demonstrate censorship, because it is also possible that the server or its network connection is down. For this reason, the ONI simultaneously checks from an unrestricted Internet connection. If the site is inaccessible from both connections, it is treated as being down. Censorship is only attributed if the site can be reliably accessed from the unrestricted connection, but not by the in-country testers. This approach has been very successful at analysing previously studied censorship regimes but could not positively identify censorship in Belarus. Here sites were inaccessible (often intermittently) from all Internet connections tried.
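A minimal sketch of that decision rule (the function and label names here are mine, not ONI's) makes clear why the Belarus measurements were so hard to interpret:

```python
# Simplified model of the dual-vantage test described above: censorship is
# only attributed when the unrestricted connection can reach the site but the
# in-country testers cannot. (Names and labels are illustrative only.)
def classify(reachable_in_country: bool, reachable_outside: bool) -> str:
    if not reachable_outside:
        return "down"               # server or its connectivity is simply broken
    if reachable_in_country:
        return "accessible"
    return "possibly censored"      # only the path from inside the country fails

# The Belarus case: sites failing (often intermittently) from every vantage
# point, which this rule can only label "down", whatever the real cause.
print(classify(reachable_in_country=False, reachable_outside=False))  # down
```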

Ordinarily this result would be assumed to be the product of network or configuration errors; however, the operators of these sites claimed the faults were caused by denial of service (DoS) attacks, hacking attempts or other government-orchestrated efforts. Because many of the sites or their domain names were hosted in Belarus, and given the state stranglehold on communication infrastructure, these claims were plausible, but generating evidence is difficult. On the client side, the coarse results available from the current ONI testing software are insufficient to capture the subtlety of the alleged attacks.

What is needed is more intelligent software, which tries to establish, at the packet level, exactly why a particular connection fails. Network debugging tools exist, but are typically designed for experts, whereas in the anti-censorship scenario the volunteers in the country being studied should not need to care about these details. Instead the software should perform basic analysis before securely sending the low-level diagnostic information back to a central location for further study.
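As a rough sketch of what such a tool might record, the fragment below (my own illustration, not ONI software) notes at which stage a fetch fails rather than merely reporting a site as unreachable; a real client would also capture packet traces and ship the details back securely for analysis.

```python
# Illustrative per-connection diagnosis: distinguish DNS, TCP and HTTP failures
# instead of returning a single "unreachable" verdict.
import socket, urllib.request

def diagnose(hostname: str, port: int = 80) -> str:
    try:
        addr = socket.gethostbyname(hostname)          # stage 1: DNS resolution
    except socket.gaierror as e:
        return f"DNS failure: {e}"
    try:
        with socket.create_connection((addr, port), timeout=10):
            pass                                       # stage 2: TCP connection
    except OSError as e:
        return f"TCP connect to {addr} failed: {e}"
    try:
        urllib.request.urlopen(f"http://{hostname}/", timeout=10)  # stage 3: HTTP
    except Exception as e:
        return f"HTTP request failed: {e}"
    return "OK"

print(diagnose("example.com"))
```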

There is also a place for improved software at the server side. In response to reports of DoS and hacking attacks we requested logs from the administrators of the sites in question to substantiate the allegations, but none were forthcoming. A likely and understandable reason is that the operators did not want to risk the privacy of their visitors by releasing such sensitive information. Network diagnostic applications on the server could be adapted to generate evidence of attacks, while protecting the identity of users. Ideally the software would also resist fabrication of evidence, but this might be infeasible to do robustly.
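One way such server-side software might protect visitors is to replace client addresses with keyed pseudonyms before any log summary leaves the machine. The sketch below illustrates only that privacy step (it does nothing about tamper-resistance), and all names in it are my own.

```python
# Illustrative log summarisation: keep evidence of a flood (request counts per
# source) while hiding visitor identities behind a keyed hash of the IP address.
import hmac, hashlib, os
from collections import Counter

SITE_SECRET = os.urandom(16)   # known only to the site operator

def pseudonym(ip: str) -> str:
    return hmac.new(SITE_SECRET, ip.encode(), hashlib.sha256).hexdigest()[:12]

def summarise(access_log_ips):
    """Per-pseudonym request counts that could be shared with investigators."""
    return Counter(pseudonym(ip) for ip in access_log_ips)

# A single source issuing thousands of requests still stands out clearly.
print(summarise(["203.0.113.7"] * 5000 + ["198.51.100.2"] * 3).most_common(2))
```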

As the relevance of the Internet to politics grows, election monitoring will need to adapt accordingly. This brings new challenges, so both the procedures and the tools used must change. Whether Belarus was the first example of indirect state censorship seen by the ONI is unclear, but in either case I suspect it will not be the last.

Covert conflict in social networks

Last summer Ross Anderson and I published a technical report titled "The Topology of Covert Conflict", with preliminary results on attacks and defences in complex networks. We explored various tactical and strategic options available to combatants involved in conflict. The paper has now been accepted for publication at WEIS 2006.

This work has also been discussed on various blogs and websites.

D-Link settles!

All the fuss about D-Link's use of the Danish stratum 1 time server seems to have had one good result. Poul-Henning Kamp's web page has the following announcement this morning:

“D-Link and Poul-Henning Kamp announced today that they have amicably resolved their dispute regarding access to Mr. Kamp’s GPS.Dix.dk NTP Time Server site. D-Link’s existing products will have authorized access to Mr. Kamp’s server, but all new D-Link products will not use the GPS.Dix.dk NTP time server. D-Link is dedicated to remaining a good corporate and network citizen.”

which was nice.

Time will tell if D-Link has arranged their firmware to avoid sending undesirable traffic to other stratum 1 time servers as well, but at least the future well-being of Poul-Henning’s machine is assured.

Browser storage of passwords: a risk or opportunity?

Most web browsers are happy to remember users' passwords, but many banks disable this feature on their websites, shifting the task of remembering passwords back to customers. This decision might have been rational when malware was the major threat, but it also removes a cue that users would otherwise get when a familiar website's address changes. The rise of phishing could thus make their choice counter-productive. We discuss why.

“Autocompletion”, provided by Mozilla/Firefox, Internet Explorer and Opera, saves details entered in web forms, including passwords. This improves usability, as users are no longer required to remember passwords, but it has some adverse effects on security (we leave aside the privacy problems). In particular, passwords must be stored unencrypted, putting them at risk of compromise both by other users of the same computer and by malware on the machine. Mozilla improves the situation slightly by allowing the password database to be encrypted on the hard disk and unlocked with a master password. However, this is not the default, so few will use it; in either case, if the browser is left running, other users can exploit the passwords, and malware can take them from the process memory.

For this reason, many banks have disabled password autocompletion by adding autocomplete="off" to the form. This prevents Mozilla and IE from storing the password (Opera ignores the website's request), resisting the above threats, but does it introduce more problems than it solves? Saddled with the responsibility of remembering his password, the customer might reduce security in order to cope. He could write down the password and keep it near the computer or on his person; this allows strong passwords but risks compromise by anyone with physical access. Alternatively he might choose an easy-to-remember, low-security password, and/or use the same one on multiple websites, introducing vulnerabilities exploitable by electronic attackers.
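For concreteness, a login form that opts out of autocompletion might look like the following; the field names are hypothetical, shown here as a template string that a server-side handler could emit.

```python
# Minimal illustration (hypothetical field names): a login form asking the
# browser not to remember the password, as described above.
LOGIN_FORM = """
<form method="post" action="/login" autocomplete="off">
  <input type="text" name="customer_id">
  <input type="password" name="passcode" autocomplete="off">
  <input type="submit" value="Log in">
</form>
"""

if __name__ == "__main__":
    print(LOGIN_FORM)   # a real site would serve this as part of its login page
```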

More topically, autocompletion resists phishing attacks. A form field is autocompleted only if it is at the same URL (IE) or the same hostname and field name (Mozilla) as when the password was entered. If a potential victim is sent to a phishing site, autocomplete will not trigger, hopefully causing the user to investigate the site more carefully before remembering and entering the password. Rather than making entering a password a reflex action, autocomplete turns it into an exceptional case, allowing and encouraging pause for thought. However, this will not happen for banks; all those I was able to test disabled the feature (Halifax, Egg and Lloyds). Does this improve security, or just allow banks to shift liability onto customers? Is it the result of a carefully performed risk analysis, or simply a knee-jerk reaction against a new feature, more the result of folk wisdom than sense?
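A toy model of those matching rules (my own simplification, not actual browser code) shows why a look-alike phishing hostname gets no autofill, and so breaks the user's reflex:

```python
# Toy model of the matching rules described above: IE keys stored passwords by
# full page URL, Mozilla/Firefox by hostname plus form field name.
from urllib.parse import urlparse

def would_autofill(stored: dict, current: dict, browser: str = "mozilla") -> bool:
    if browser == "ie":
        return stored["url"] == current["url"]
    return (urlparse(stored["url"]).hostname == urlparse(current["url"]).hostname
            and stored["field"] == current["field"])

saved = {"url": "https://www.examplebank.com/login", "field": "passcode"}
phish = {"url": "https://www.examp1ebank.com/login", "field": "passcode"}
print(would_autofill(saved, phish))   # False: no autofill, an unusual cue for the user
```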

Security economics might help answer these questions. A simplistic analysis is that autocompletion resists phishing but increases the risk from malware and from fraud by members of a customer's household. Deciding on the best course of action requires access to detailed fraud statistics, but the banks keep these as closely guarded secrets. Nevertheless, something can still be said about the comparative risk to customers of the above attacks. Anecdotal evidence suggests that fraud through malware attacks is small compared to phishing, so that just leaves intra-household fraud. At least after the fact, phishing can be comparatively easy for the customer to repudiate: he might still have the email, and the transactions are typically international. Fraud by members of a household is considerably more difficult to refute; the transactions might be made in person, leaving less of an audit trail, and are likely to be local. So rationally banks should enable autocompletion, reducing phishing attacks, which they have to pay out for, and shifting fraud into the household, where the cost can be passed on to customers.

But the banks haven’t done this. Have they just not thought about this, or does the evidence justify their decision? I welcome your comments.

[Thanks to Ross Anderson for his comments on this issue.]

When firmware attacks! (DDoS by D-Link)

Last October I was approached by Poul-Henning Kamp, a self-styled “Unix guru at large” and one of the FreeBSD developers. One of his interests is precision timekeeping, and he runs a stratum 1 timeserver located at DIX, the neutral Danish Internet Exchange Point. Because it provides a valuable service (extremely accurate timing) to Danish ISPs, the charges for his hosting at DIX are waived.

Unfortunately, his NTP server has been coming under constant attack from a stream of Network Time Protocol (NTP) time-request packets arriving from random IP addresses all over the world. These were disrupting the gentle flow of traffic from the 2000 or so genuine systems that were “chiming” against his master system, and also consuming a very great deal of bandwidth. He was very interested in finding out the source of this denial of service attack, and in making it stop!

AV-net – a new solution to the Dining Cryptographers Problem

Last week, at the 14th International Workshop on Security Protocols, I presented a talk on the paper “A 2-round Anonymous Veto Protocol” (joint work with Piotr Zieliński), which interested some people. The talk was about solving the following crypto puzzle.

In a room where all discussions are public, the Galactic Security Council must decide whether to invade an enemy planet. One delegate wishes to veto the measure, but worries about sanctions from the pro-war faction. This presents a dilemma: how can one anonymously veto the decision?

This veto problem is essentially the same as the Dining Cryptographers Problem, first proposed by Chaum in 1988: how to compute a Boolean OR securely. However, Chaum's classic solution, DC-net, assumes unconditionally secure private channels among participants, which don't exist in our problem setting. Our protocol, the Anonymous Veto Network (or AV-net), not only overcomes all the major limitations of DC-net, but is also very efficient in many respects (probably optimal).
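For readers who haven't met DC-net, here is a toy single-bit round for three participants (my own illustration, not the AV-net construction from the paper). It shows both the anonymity trick and the assumptions that AV-net removes.

```python
# Toy DC-net round: each pair of participants shares a secret coin; everyone
# announces the XOR of their coins, flipping it if they wish to veto. The XOR
# of all announcements reveals that *someone* vetoed without revealing who.
# Note the assumptions: the pairwise coins need private channels, and two
# simultaneous vetoes would cancel out - limitations AV-net is designed to avoid.
import secrets

def dc_net_round(vetoes):
    n = len(vetoes)
    coins = {(i, j): secrets.randbits(1) for i in range(n) for j in range(i + 1, n)}
    announcements = []
    for i in range(n):
        b = 0
        for (a, c), coin in coins.items():
            if i in (a, c):          # XOR in every coin this participant shares
                b ^= coin
        if vetoes[i]:
            b ^= 1                   # flip the announced bit to veto
        announcements.append(b)
    result = 0
    for b in announcements:
        result ^= b                  # shared coins cancel; only veto bits remain
    return result

print(dc_net_round([False, True, False]))  # 1: a veto happened, but by whom is hidden
```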

Award winners

Congratulations to Steven J. Murdoch and George Danezis, who were recently awarded the Computer Laboratory Lab Ring (the local alumni association) award for the “most notable publication” (that's notable as in jolly good) of the past year, written by anyone in the whole lab.

Their paper, “Low cost traffic analysis of Tor”, was presented at the 2005 IEEE Symposium on Security and Privacy (Oakland 2005). It demonstrates a feasible attack, within the designer’s threat model, on the anonymity provided by Tor, the second generation onion routing system.

George was recently back in Cambridge for a couple of days (he’s currently a post-doc visiting fellow at the Katholieke Universiteit Leuven) so we took a photo to commemorate the event (see below). As it happens, Steven will be leaving us for a while as well, to work as an intern at Microsoft Research for a few months… one is reminded of the old joke about the Scotsman coming south of the border and thereby increasing the average intelligence of both countries 🙂

George Danezis and Steven J. Murdoch, most notable publication 2006

Fraud or feature?

Dual-use technologies are everywhere. My colleagues and I have been presenting “Phish and Chips” and “The Man-in-the-Middle Defence” at the Security Protocols Workshop this week, in which we describe how the EMV protocol suite can be modified in unintended ways, and how a card interceptor can be used for both fraudulent and beneficial activities.

A second example is how the waters in which internet phishermen angle for account details regularly become muddied by the marketing departments of enterprising banks. Every once in a while, these chaps manage to send out genuine emails entreating the user to click on the link in the email, or to navigate to a site not clearly part of the bank’s site, then provide their personal details.

Today I discovered that the same dilemma has been playing out in the fight to secure the fascia of cash machines against the attachment of illicit skimmers. I was off to work promptly this morning, to open up shop for an ITN TV crew doing a piece on Chip and PIN. After cleverly managing to miss my train, I was forced to take a rather expensive taxi ride to Cambridge, so expensive, in fact, that I had to have the taxi stop for me to withdraw some cash. It was then that I spotted this device attached to the slot of the Barclays Bank ATM on White Horse Road in Baldock, Hertfordshire.

[Photo: detail of the attachment on the side of the ATM card slot]
