Monthly Archives: February 2010

Reliability of Chip & PIN evidence in banking disputes

It has now been two weeks since we published our paper “Chip and PIN is broken”. There, we presented the no-PIN attack, which allows criminals to use a stolen Chip and PIN card without knowing its PIN. The paper has triggered a considerable amount of discussion, on Light Blue Touchpaper, Finextra, and elsewhere.

One of the topics which has come up is the effect of the no-PIN vulnerability on the consideration of evidence in disputed card transactions. Importantly, we showed that a merchant till-receipt stating “PIN verified” cannot be relied upon, because this message will still appear if the attack we presented is executed, even though the wrong PIN was entered.

On this point, the spokesperson for the banking trade body, the UK Cards Association (formerly known as APACS) stated:

“Finally the issuer would not review a suspected fraud involving a PIN and make a decision based on the customer’s paper receipt stating that the transaction was “PIN verified”, as suggested by Cambridge.”

Unfortunately, card issuers do precisely this, as shown in a recent dispute between American Express and a customer over £9,500 worth of point-of-sale transactions. In their letter to the Financial Ombudsman Service, American Express presented the till receipt as the sole evidence that the PIN was correctly entered:

“We also requested at the time of this claim, supporting documents from [the merchant] and were provided a copy of the till receipts confirming these charges were verified with the PIN.”

Requests to American Express for the audit logs that include the CVR (card verification results), which would have shown whether or not the no-PIN attack had been used, were denied. The ombudsman nevertheless decided against the customer.

The issue of evidence in disputed transaction cases is complex, and extends beyond the questions raised by the no-PIN attack alone. To help bring some clarity, I wrote an article, “Reliability of Chip & PIN evidence in banking disputes”, for the 2009 issue of the Digital Evidence and Electronic Signature Law Review, a law journal. This article was written for a legal audience, but would also be suitable for other non-technical readers. It is now available online (PDF, 221 kB).

In this article, I give an introduction to payment card security, covering both Chip & PIN and its predecessors. I then give a high-level description of the EMV protocol which underlies Chip & PIN, with an emphasis on the evidence it generates. I summarize various payment card security vulnerabilities and how their exploitation might be detected. Finally, I discuss methods for collecting and analyzing evidence, along with the difficulties currently faced by customers disputing transactions.

Opting out of health data collection

The Government is rolling out a system – the Summary Care Record or SCR – which will make summaries of medical records available to hundreds of thousands of NHS staff in England. Ministers say it will facilitate emergency and unscheduled care, but the evidence in favour of such systems is slight. It won’t be available abroad (or even in Scotland), so if you are allergic to penicillin you’d better keep on wearing your dogtag. But the privacy risk is clear; a similar system in Scotland was quickly abused. Colleagues and I criticised the SCR in Database State, a report we wrote on how government systems infringe human rights.

Doctors have acted at last. The SCR is being rolled out across London, and the Local Medical Committees there have produced a poster and an opt-out leaflet for doctors to use in their waiting rooms. The SCR is also political: while Labour backs it, the Conservatives and the Lib Dems oppose it. Its roll-out means that millions of leaflets will be distributed to voters, pardon me, patients in London extolling its virtues. A cynic might ask whether this is a suitable use of public funds during an election campaign.

Measuring Typosquatting Perpetrators and Funders

For more than a decade, aggressive website registrants have been engaged in ‘typosquatting’ — the intentional registration of misspellings of popular website addresses. Uses for the diverted traffic have evolved over time, ranging from hosting sexually-explicit content to phishing. Several countermeasures have been implemented, including outlawing the practice and developing policies for resolving disputes. Despite these efforts, typosquatting remains rife.

But just how prevalent is typosquatting today, and why is it so pervasive? Ben Edelman and I set out to answer these very questions. In Measuring the Perpetrators and Funders of Typosquatting (appearing at the Financial Cryptography conference), we estimate that at least 938,000 typosquatting domains target the top 3,264 .com sites, and we crawl more than 285,000 of these domains to analyze their revenue sources.
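To make concrete what counts as a typo domain, here is a minimal sketch (my illustration for this post, not the generation model used in the paper) that enumerates single-error variants of a target domain using common typo models: missing dot, character omission, duplication, transposition, and substitution.

```python
# Illustrative only: enumerate candidate typo domains for a target site
# using simple typo models. This is not the paper's methodology, just a
# sketch of what "one typing error away" can mean.
import string

def typo_candidates(domain):
    """Return single-error variants of a domain such as 'example.com'."""
    name, _, tld = domain.rpartition('.')
    variants = {'www' + domain}  # missing-dot typo: wwwexample.com
    for i in range(len(name)):
        variants.add(name[:i] + name[i+1:] + '.' + tld)                # omission
        variants.add(name[:i] + name[i] * 2 + name[i+1:] + '.' + tld)  # duplication
        if i + 1 < len(name):                                          # transposition
            variants.add(name[:i] + name[i+1] + name[i] + name[i+2:] + '.' + tld)
        for c in string.ascii_lowercase:                               # substitution
            if c != name[i]:
                variants.add(name[:i] + c + name[i+1:] + '.' + tld)
    variants.discard(domain)
    return sorted(variants)

print(len(typo_candidates('example.com')))  # roughly 200 candidates
```

Even this crude enumeration yields a couple of hundred candidates for one short domain, which helps explain how at least 938,000 typo domains can target just the top 3,264 sites.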

Call for papers: WEIS 2010 — Submissions due next week

The Workshop on the Economics of Information Security (WEIS) is the leading forum for interdisciplinary scholarship on information security, combining expertise from the fields of economics, social science, business, law, policy and computer science. Prior workshops have explored the role of incentives between attackers and defenders, identified market failures dogging Internet security, and assessed investments in cyber-defense.

The ninth installment of WEIS will take place June 7–8 at Harvard. Submissions are due in one week, February 22, 2010. For more information, see the complete call for papers.

WEIS 2010 will build on past efforts using empirical and analytic tools to not only understand threats, but also strengthen security through novel evaluations of available solutions. How should information risk be modeled given the constraints of rare incidence and high interdependence? How do individuals’ and organizations’ perceptions of privacy and security color their decision making? How can we move towards a more secure information infrastructure and code base while accounting for the incentives of stakeholders?

If you have been working to answer questions such as these, then I encourage you to submit a paper.

What's the Buzz about? Studying user reactions

Google Buzz has been rolled out to 150M Gmail users around the world. In their own words, it’s a service to start conversations and share things with friends. Cynics have said it’s a megalomaniacal attempt to leverage the existing user base to compete with Facebook/Twitter as a social hub. Privacy advocates have rallied sharply around a particular flaw: the path of least resistance to signing up for Buzz includes automatically following people based on Buzz’s recommendations from email and chat frequency, and this “follower” list is completely public unless you find the well-hidden privacy setting.

As a business decision this makes sense: the only chance for Buzz to succeed is if users can get started very quickly. But it is a privacy misstep that a mandatory internal review would certainly have objected to. Email is still a private, personal medium. People email their mistresses, workers email about job opportunities, and reporters email anonymous sources, all with the same accounts they use for everything else. Besides the few embarrassing incidents this will surely cause, it’s fundamentally playing with people’s perceptions of public and private online spaces and actively changing social norms, as my colleague Arvind Narayanan spelled out nicely.

Perhaps more interesting than the pundits’ responses, though, is the ability to view thousands of users’ reactions to Buzz as they happen. Google’s design philosophy of “give minimal instructions and just let users type things into text boxes and see what happens” has preserved a virtual Pompeii of confused users trying to figure out what the new thing was while accidentally broadcasting their thoughts to the entire Internet. If you search Buzz for words like “stupid,” “sucks,” and “hate,” the majority of the conversation so far is about Buzz itself. Thoughts are all over the board: confusion, stress, excitement, malaise, anger, pleading. Thousands of users are badly confused by Google’s “follow” and “profile” metaphors. Others are wondering how the service compares to the competition. Many just want the whole thing to go away (leading to a few how-to guides), or are blasting Google, or blasting others for complaining.

It’s a major data mining and natural language processing challenge to analyze the entire body of reactions to the new service, but the general reaction is widespread disorientation and confusion. In the emerging field of security psychology, the first 48 hours of Buzz posts could provide a wealth of data about how people react when their privacy expectations are suddenly shifted by the machinations of Silicon Valley.
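As a toy illustration of the keyword search mentioned above (assuming a corpus of public Buzz posts has already been collected; the word list and code are mine, not part of any actual study), one could tally reaction words across posts like this:

```python
# Toy sketch: count reaction keywords in an already-collected corpus of
# public posts. A serious analysis would need real NLP (negation,
# sarcasm, topic detection), not bag-of-words matching.
import re
from collections import Counter

REACTION_WORDS = {'stupid', 'sucks', 'hate', 'confused', 'help', 'love'}

def reaction_counts(posts):
    counts = Counter()
    for post in posts:
        words = set(re.findall(r"[a-z']+", post.lower()))
        counts.update(words & REACTION_WORDS)  # count each word once per post
    return counts

sample = ["I hate Buzz and I'm so confused",
          "buzz sucks, how do I turn it off? help"]
print(reaction_counts(sample))
# Counter({'hate': 1, 'confused': 1, 'sucks': 1, 'help': 1})
```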

Chip and PIN is broken

There should be a 9-minute film on Newsnight tonight (10:30pm, BBC Two) showing some research by Steven Murdoch, Saar Drimer, Mike Bond and me. We demonstrate a middleperson attack on EMV which lets criminals use stolen chip and PIN cards without knowing the PIN.

Our technical paper, Chip and PIN is Broken, explains how. It has been causing quite a stir, as it has been circulating privately within the banking industry for over two months, and it has been accepted for the IEEE Symposium on Security and Privacy, the top conference in computer security. (See also our FAQ and the press release.)

The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.
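The core of the middleperson device can be sketched in a few lines. The sketch below is illustrative pseudocode rather than our actual implementation: it relays every command between terminal and card unchanged, except the terminal’s VERIFY (PIN check) command, which it answers itself with the success status 0x9000, so the genuine card never sees a PIN at all.

```python
# Toy model of the no-PIN middleperson (illustrative, not our actual code):
# relay every command APDU unchanged, except answer the terminal's VERIFY
# (offline PIN check) ourselves, so the genuine card never sees a PIN.

VERIFY_INS = 0x20          # ISO 7816 instruction byte for VERIFY
SW_SUCCESS = b"\x90\x00"   # status word 0x9000: "command succeeded"

def middleperson(apdu_from_terminal, send_to_card):
    """Relay one command APDU from terminal to card, suppressing PIN checks."""
    ins = apdu_from_terminal[1]  # the instruction byte is the second byte
    if ins == VERIFY_INS:
        # Don't forward the PIN. The card believes no PIN verification took
        # place (as in a chip-and-signature transaction), while the terminal
        # sees "PIN OK" and the receipt gets stamped "Verified by PIN".
        return SW_SUCCESS
    return send_to_card(apdu_from_terminal)
```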

New attacks on HMQV

Many people may still remember the debates a few years ago about the HMQV protocol, a modification of MQV with the primary aim of provable security. Various attacks were later discovered against the original HMQV. In the subsequent submission to the IEEE P1363 standards, the HMQV protocol was revised to address the reported weaknesses.

However, the revised HMQV protocol is still vulnerable. In a paper that I presented at Financial Cryptography ’10, I described two new attacks. The first is a counterexample that invalidates the basic authentication feature of the protocol. The second applies more generally to other key exchange protocols, even though many of them have formal security proofs.

The first attack is particularly concerning, since the formal security proofs failed to detect this basic flaw. The HMQV protocol explicitly specifies that the Certificate Authority (CA) does not need to validate the user’s public key beyond checking that it is not zero. (This is one reason why HMQV claims to be more efficient than MQV.) The protocol therefore allows the CA to certify a small subgroup element as the user’s “public key”. Then, anyone who knows this “public key” can successfully pass authentication using HMQV (see the paper for details). Note that in this case a private key doesn’t exist, yet the authentication succeeds. What, then, is the “authentication” in HMQV based on?
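To see why no private key can exist, here is a small numerical illustration with toy parameters (my own construction for this post; the paper works in the full HMQV setting). In a group modulo a safe prime p = 2q + 1, honest public keys live in the order-q subgroup generated by g, but the element p - 1 has order 2 and lies outside it, so no exponent x with g^x = w exists; a CA that only checks "not zero" would certify it anyway.

```python
# Toy parameters illustrating the point above (not the paper's notation):
# a small-subgroup element has no discrete log with respect to g, so no
# private key exists, yet it passes the CA's only check (non-zero).
p = 23            # safe prime: p = 2*q + 1
q = 11
g = 2             # generator of the order-q subgroup (quadratic residues mod 23)
w = p - 1         # 22: an element of order 2, outside that subgroup

subgroup = {pow(g, x, p) for x in range(q)}
print(sorted(subgroup))   # the 11 quadratic residues mod 23
print(w in subgroup)      # False: no x satisfies g**x % p == w
print(pow(w, 2, p))       # 1: w generates the tiny subgroup {1, 22}
```

Because w has tiny order, its contribution to any session key computed with it is confined to a handful of values, which is what lets someone who merely knows w complete the authentication; the paper gives the full attack.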

The HMQV author acknowledges this attack, but states that it has no bad effects. I disagree, but that is for the reader to decide.

Updates:

  • 2010-03-11: Full version of the paper available here
  • 2010-04-04: My comments on Tang’s paper.

The need for privacy ombudsmen

Facebook is rolling out two new features with privacy implications: an app dashboard and a gaming dashboard. Take a 30-second look at the beta versions, which are already live (with real user data), and see if you spot any likely problems. For non-Facebook users: the new interfaces essentially provide a list of the applications your friends are using, including “Recent Activity”, which lists when each application was used. What could possibly go wrong?

Well, some users may use applications they don’t want their friends to know about, like dating or job search. And they certainly may not want others to know when they used an application, if this makes it clear that they were playing a game on company time. This isn’t a catastrophic privacy breach, but it will definitely lead to a few embarrassing situations. As I’ve argued before, users should have a basic privacy expectation that if they continue to use a service in a consistent way, data won’t be shared in a new, unexpected manner of which they have no warning or control; this new feature violates that expectation. The interesting thing is how Facebook is continually caught by surprise when its spiffy new features upset users. They seem equally clueless with their response: allowing developers to opt an application out of appearing on the dashboard. Developers have no incentive to do this, as they want maximum exposure for their apps. A minimally acceptable solution must allow users to opt themselves out.

It’s inexcusable that Facebook doesn’t appear to have a formal privacy testing process to review new features and recommend fixes before they go live. The site is quite complicated, but a small team should be able to identify the issues with something like the new dashboard in a day’s work. It could be effective with 1% of the manpower of the company’s nudity cops. Notably, Facebook is trying to resolve a class-action lawsuit over its Beacon fiasco by creating an independent privacy foundation, which privacy advocates and users have both objected to. As a better way forward, I’d call for an in-house “privacy ombudsman” team, with the authority to review new features and publish analysis of them, as a much more direct step towards preventing future privacy failures.