Call for papers: WEIS 2010 — Submissions due next week

The Workshop on the Economics of Information Security (WEIS) is the leading forum for interdisciplinary scholarship on information security, combining expertise from the fields of economics, social science, business, law, policy and computer science. Prior workshops have explored the role of incentives between attackers and defenders, identified market failures dogging Internet security, and assessed investments in cyber-defense.

The ninth installment of WEIS will take place June 7–8 at Harvard. Submissions are due in one week, on February 22, 2010. For more information, see the complete call for papers.

WEIS 2010 will build on past efforts using empirical and analytic tools to not only understand threats, but also strengthen security through novel evaluations of available solutions. How should information risk be modeled given the constraints of rare incidence and high interdependence? How do individuals’ and organizations’ perceptions of privacy and security color their decision making? How can we move towards a more secure information infrastructure and code base while accounting for the incentives of stakeholders?

If you have been working to answer questions such as these, then I encourage you to submit a paper.

What's the Buzz about? Studying user reactions

Google Buzz has been rolled out to 150M Gmail users around the world. In Google’s own words, it’s a service to start conversations and share things with friends. Cynics have said it’s a megalomaniacal attempt to leverage the existing user base to compete with Facebook and Twitter as a social hub. Privacy advocates have rallied sharply around a particular flaw: the path of least resistance to signing up for Buzz includes automatically following people based on Buzz’s recommendations from email and chat frequency, and this “follower” list is completely public unless you find the well-hidden privacy setting. As a business decision this makes sense: the only chance for Buzz to succeed is if users can get started very quickly. But it is a privacy misstep that a mandatory internal review would certainly have objected to. Email is still a private, personal medium. People email their mistresses, workers email about job opportunities, and reporters email anonymous sources, all with the same accounts they use for everything else. Beyond the few embarrassing incidents this will surely cause, Buzz is fundamentally playing with people’s perceptions of public and private online spaces and actively changing social norms, as my colleague Arvind Narayanan spelled out nicely.

Perhaps more interesting than the pundits’ responses, though, is the ability to view thousands of users’ reactions to Buzz as they happen. Google’s design philosophy of “give minimal instructions and just let users type things into text boxes and see what happens” preserved a virtual Pompeii of confused users trying to figure out what the new thing was and accidentally broadcasting their thoughts to the entire Internet. If you search Buzz for words like “stupid,” “sucks,” and “hate,” the majority of the conversation so far is about Buzz itself. Reactions are all over the board: confusion, stress, excitement, malaise, anger, pleading. Thousands of users are badly confused by Google’s “follow” and “profile” metaphors. Others are wondering how the service compares to the competition. Many just want the whole thing to go away (prompting a few how-to guides), while others are blasting Google or blasting fellow users for complaining.

It’s a major data mining and natural language processing challenge to analyze the entire body of reactions to the new service, but the general reaction is widespread disorientation and confusion. In the emerging field of security psychology, the first 48 hours of Buzz posts could provide a wealth of data about how people react when their privacy expectations are suddenly shifted by the machinations of Silicon Valley.
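As a rough illustration of what a first pass over such a corpus might look like, here is a toy keyword tally in Python. The post list and the choice of keywords are hypothetical, and a serious analysis of the full reaction corpus would obviously need far more than string matching.

```python
from collections import Counter

# Hypothetical complaint keywords, drawn from the searches mentioned above.
KEYWORDS = ("stupid", "sucks", "hate", "confused", "turn off", "disable")

def keyword_tally(posts):
    """Count how many posts mention each keyword (case-insensitive)."""
    tally = Counter()
    for post in posts:
        text = post.lower()
        for kw in KEYWORDS:
            if kw in text:
                tally[kw] += 1
    return tally

# Example:
#   keyword_tally(["I hate Buzz", "buzz sucks, how do I disable it?"])
#   -> Counter({'hate': 1, 'sucks': 1, 'disable': 1})
```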

Chip and PIN is broken

There should be a 9-minute film on Newsnight tonight (10:30pm, BBC Two) showing some research by Steven Murdoch, Saar Drimer, Mike Bond and me. We demonstrate a middleperson attack on EMV which lets criminals use stolen chip and PIN cards without knowing the PIN.

Our technical paper Chip and PIN is Broken explains how. It has been causing quite a stir as it has circulated privately within the banking industry for over two months, and it has been accepted for the IEEE Symposium on Security and Privacy, the top conference in computer security. (See also our FAQ and the press release.)

The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.
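To make the middleperson idea concrete, here is a minimal sketch of the interposer logic: the terminal’s VERIFY command carrying the PIN is answered directly with the ISO 7816 success status word and never reaches the card, while every other command is relayed untouched. This is a conceptual sketch only, not the kit used in the demonstration; the relay_to_card and reply_to_terminal hooks stand in for whatever hardware sits between the genuine card and the terminal.

```python
SW_SUCCESS = bytes([0x90, 0x00])  # ISO 7816 status word meaning "OK"

def middleperson(apdu_from_terminal, relay_to_card, reply_to_terminal):
    cla, ins = apdu_from_terminal[0], apdu_from_terminal[1]
    if (cla, ins) == (0x00, 0x20):
        # VERIFY (PIN) command: swallow it and claim success. The card never
        # sees a PIN verification (it believes another cardholder verification
        # method applies), while the terminal records "Verified by PIN".
        reply_to_terminal(SW_SUCCESS)
    else:
        # Every other command and response passes through unmodified, so the
        # rest of the transaction looks entirely normal to both sides.
        reply_to_terminal(relay_to_card(apdu_from_terminal))
```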

New attacks on HMQV

Many people may still remember the debates a few years ago about the HMQV protocol, a modification of MQV with the primary aim of provable security. Various attacks were later discovered against the original HMQV. In the subsequent submission to the IEEE P1363 standards group, the protocol was revised to address the reported weaknesses.

However, the revised HMQV protocol is still vulnerable. In a paper that I presented at Financial Cryptography ’10, I described two new attacks. The first gives a counterexample that invalidates the basic authentication feature of the protocol. The second applies more generally to other key exchange protocols, even though many of them have formal security proofs.

The first attack is particularly concerning, since the formal security proofs failed to detect this basic flaw. The HMQV protocol explicitly specifies that the Certificate Authority (CA) does not need to validate the public key beyond checking that it is not zero (this is one reason why HMQV claims to be more efficient than MQV). So the protocol allows the CA to certify a small-subgroup element as the user’s “public key”. Anyone who knows this “public key” can then successfully pass authentication using HMQV (see the paper for details). Note that in this case a corresponding private key doesn’t even exist, yet the authentication succeeds. What, then, is the “authentication” in HMQV based on?
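For readers who want to see what the omitted check looks like, here is a minimal sketch (my own illustration, not taken from the paper or from the HMQV specification) of the kind of public-key validation that would reject a small-subgroup element, assuming the usual setting of a subgroup of prime order q in Z_p*:

```python
def validate_public_key(Y, p, q):
    """Reject Y unless it is a non-trivial element of the order-q subgroup of Z_p*.

    p is the prime modulus and q the prime order of the intended subgroup
    (q divides p - 1); both are assumed to be agreed system parameters.
    """
    if Y <= 1 or Y >= p:
        return False          # rules out 0, 1 and out-of-range values
    # The check HMQV omits for efficiency: a small-subgroup element
    # (of order 2, 3, ...) fails this test, so a CA performing it would
    # refuse to certify such a "public key".
    return pow(Y, q, p) == 1
```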

The HMQV author acknowledges this attack, but states that it has no bad effects. I disagree, but I will leave that for the reader to decide.

Updates:

  • 2010-03-11: Full version of the paper available here
  • 2010-04-04: My comments on Tang’s paper.

The need for privacy ombudsmen

Facebook is rolling out two new features with privacy implications: an application dashboard and a gaming dashboard. Take a 30-second look at the beta versions, which are already live (with real user data), and see if you spot any likely problems. For non-Facebook users, the new interfaces essentially provide a list of the applications that your friends are using, including a “Recent Activity” feed which lists when applications were used. What could possibly go wrong?

Well, some users may use applications they don’t want their friends to know about, like dating or job-search apps. And they certainly may not want others to know when they used an application, if this makes it clear that they were playing a game on company time. This isn’t a catastrophic privacy breach, but it will definitely lead to a few embarrassing situations. As I’ve argued before, users should have a basic privacy expectation that if they continue to use a service in a consistent way, their data won’t be shared in a new, unexpected manner without warning or control, and this new feature violates that expectation. The interesting thing is how Facebook is continually caught by surprise when its spiffy new features upset users. The company seems equally clueless in its response: allowing developers to opt an application out of appearing on the dashboard. Developers have no incentive to do this, as they want maximum exposure for their apps. A minimally acceptable solution must allow users to opt themselves out.

It’s inexcusable that Facebook doesn’t appear to have a formal privacy testing process to review new features and recommend fixes before they go live. The site is quite complicated, but a small team should be able to identify the issues with something like the new dashboard in a day’s work; it could be effective with 1% of the manpower of the company’s nudity cops. Notably, Facebook is trying to resolve a class-action lawsuit over its Beacon fiasco by creating an independent privacy foundation, which privacy advocates and users have both objected to. As a better way forward, I’d call for creating an in-house “privacy ombudsman” team with the authority to review new features and publish analysis of them, a much more direct step towards preventing future privacy failures.

Why is 3-D Secure a single sign-on system?

Since the blog post on our paper Verified by Visa and MasterCard SecureCode: or, How Not to Design Authentication, there has been quite a bit of feedback, including media coverage. Generally, commenters have agreed with our conclusions, and there have been some informative contributions giving industry perspectives, including at Finextra.

One question which has appeared a few times is why we called 3-D Secure (3DS) a single sign-on (SSO) system. 3DS certainly wasn’t designed as an SSO system, but I think it meets the key requirement: it allows one party to authenticate another without credentials (passwords, keys, etc.) being set up in advance. Just as with other SSO systems such as OpenID and Windows CardSpace, there is a trusted intermediary with which both communicating parties have a relationship and which facilitates the authentication process.

For this reason, I think it is fair to classify 3DS as a special-purpose SSO system. Your card number acts as a pseudonym, and the protocol gives the merchant some assurance that the customer is the legitimate controller of that pseudonym. This is a very similar situation to OpenID, which provides the relying party assurance that the visitor is the legitimate controller of a particular URL. On top of this basic functionality, 3DS also gives the merchant assurance that the customer is able to pay for the goods, and provides a mechanism to transfer funds.

People are permitted to have multiple cards, but this does not prevent 3DS from being an SSO system. In fact, it is generally desirable, for privacy purposes, to allow users to have multiple pseudonyms. Existing SSO systems support this in various ways — OpenID lets you have multiple domain names, and Windows CardSpace uses clever cryptography. Another question which came up was whether 3DS is actually transaction authentication, because the issuer does get a description of the transaction. I would argue not, because the transaction description is never shown to the customer, so the protocol remains vulnerable to a man-in-the-middle attack if the customer’s PC is compromised.

A separate point is whether it is useful to categorize 3DS as SSO. I would argue yes, because we can then compare 3DS to other SSO systems. For example, OpenID uses the domain name system to produce a hierarchical name space, whereas 3DS has a flat numerical namespace and additional intermediaries in the authentication process. Such architectural comparisons between deployed systems are very useful to future designers. In fact, the most significant result the paper presents is one from security economics: 3DS is inferior in almost every way to the competition, yet succeeded because incentives were aligned. Specifically, the reward for implementing 3DS is the ability to push fraud costs onto someone else — from the merchant to the issuer, and from the issuer to the customer.

Multichannel protocols against relay attacks

Until now it was widely believed that the only defense against relay attacks was distance bounding. In a paper presented today at Financial Cryptography 2010 we introduce a conceptually new approach for detecting and preventing relay attacks, using multichannel protocols.

We have been working on multichannel protocols since 2005. Different channels have different advantages and disadvantages and therefore one may build a better security protocol by combining different channels for different messages in the protocol trace. (For example a radio channel like Bluetooth has high bandwidth, low latency and good usability but leaves you in doubt as to whether the message really came from the announced sender; whereas a visual channel in which you acquire a barcode with a scanner or camera has low bandwidth and poorer usability but gives stronger assurance about where the message came from.)

In this new paper we apply the multichannel paradigm to the problem of countering relay attacks. We introduce a family of protocols in which at least one message is sent over a special “unrelayable” channel. The core idea is that one channel connects the verifier to the principal with whom she shares the prearranged secret K, while another channel (the unrelayable one) connects her to the prover who is actually in front of her; and the men in the middle, however much they relay, can’t get it right on both of these channels simultaneously.
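The fragment below is an abstract sketch of that intuition, not any of the concrete protocols in the paper: the radio_channel and unrelayable_channel objects are hypothetical interfaces, and the response is simply an HMAC over a fresh nonce under the pre-shared secret K (as bytes). The point is only that a correct response must arrive on both channels, and the unrelayable one is by definition beyond the relaying attackers’ reach.

```python
import hashlib
import hmac
import os

def verifier(K, radio_channel, unrelayable_channel):
    """Accept only if the same correct response arrives on both channels."""
    nonce = os.urandom(16)
    radio_channel.send(nonce)                       # ordinary, relayable channel
    expected = hmac.new(K, nonce, hashlib.sha256).digest()
    radio_response = radio_channel.receive()        # may have been relayed
    local_response = unrelayable_channel.receive()  # only obtainable face to face
    # Relaying men in the middle can forward the radio traffic to the distant
    # principal who knows K, but they cannot inject the matching value onto
    # the unrelayable channel, so they cannot satisfy both checks at once.
    return (hmac.compare_digest(radio_response, expected) and
            hmac.compare_digest(local_response, expected))
```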

We convey this idea with several stories. Don’t take them too literally but they let us illustrate and discuss all the key security points.

[Figure: “Don’t let anyone else reuse this banknote!”]

This work is exciting for us because it opens up a new field. We look forward to other researchers following it up with implementations of unrelayable channels and with formal tools for the analysis of such protocols.

Frank Stajano, Ford-Long Wong, Bruce Christianson. Multichannel protocols to prevent relay attacks (preliminary; the final revised version of the full paper will be published in Springer LNCS)

How online card security fails

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Update (2010-01-27): There has been some follow-up media coverage.

Update (2010-01-28): The New Scientist also has the story, as has Ars Technica.

How hard can it be to measure phishing?

Last Friday I went to a workshop organised by the Oxford Internet Institute on “Mapping and Measuring Cybercrime”. The attendees represented many disciplines, ranging from lawyers through ePolicy specialists to serving police officers and an ex-Government minister. Much of the discussion related to the difficulty of saying precisely what is or is not “cybercrime”, and what might be meant by mapping or measuring it.

The position paper I submitted (one more of the extensive Moore/Clayton canon on phishing) took a step back (though of course we intend it to be a step forward), in that it looked at the very rich datasets that we have for phishing and asked whether they mean that we can usefully map or measure that particular criminal speciality.

In practice, we believe, bias in the data, and the bias of those who interpret it, mean that considerable care is needed to understand what the data actually shows. We give an example from our own work of how failing to understand the bias meant that we initially misunderstood the data, and of how various intentional distortions arise because of the self-interest of those who collect the data.

Extrapolating, this all means that getting better data on other types of cybercrime may not prove to be quite as useful as might initially be thought.

As ever, reading the whole paper (it’s only 4 sides!) is highly recommended, but to give a flavour of the problem we’re drawing attention to:

If a phishing gang host their webpages on a thousand fraudulent domains, using fifty stolen credit cards to purchase them from a dozen registrars, and then transfer money out of a hundred customer accounts leading to a monetary loss in six cases: is that 1000 crimes, or 50, or 12, or 100, or 6?

The phishing website removal companies would say that there were 1000 incidents, because they need to get 1000 domains suspended. The credit card companies would say there were 50 incidents, because 50 cardholders ought to have money reimbursed. Equally, they would have 12 registrars to “charge back” because the registrars had accepted fraudulent registrations (there might have been any number of actual card payment events between 12 and 1000, depending on whether the domains were purchased in bulk). The banks will doubtless see the criminality as 100 unauthorised transfers of money out of their customer accounts; but if they claw back almost all of the cash (because it remains within the mainstream banking system), then the six-monthly Financial Fraud Action UK (formerly APACS) report will merely record the monetary losses from the 6 successful thefts.
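Spelling the same scenario out in code makes the divergence obvious. The labels below are paraphrases of the parties discussed above, and the numbers come straight from the example.

```python
# One phishing gang, five perfectly defensible "incident" counts.
incident_counts = {
    "take-down firms (fraudulent domains to suspend)":        1000,
    "card companies (stolen cards to reimburse)":               50,
    "registrars facing charge-backs":                           12,
    "banks (unauthorised transfers out of accounts)":           100,
    "FFA UK loss report (thefts with an actual monetary loss)":   6,
}

for measurer, count in incident_counts.items():
    print(f"{measurer}: {count}")
```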

Clearly, what you count depends on who you are — but crucially, in a world where resources are deployed to meet measurement targets (and your job is at risk if you miss them), deciding what to measure will bias your decisions on what you actually do and hence how effective you are at defeating the criminals.