All posts by Steven J. Murdoch

About Steven J. Murdoch

I am Professor of Security Engineering and Royal Society University Research Fellow in the Information Security Research Group of the Department of Computer Science at University College London (UCL), and a member of the UCL Academic Centre of Excellence in Cyber Security Research. I am also a bye-fellow of Christ’s College, Cambridge, Innovation Security Architect at OneSpan, a member of the Tor Project, and a Fellow of the IET and BCS. I teach on the UCL MSc in Information Security. Further information and my papers on information security research are on my personal website. I also blog about information security research and policy on Bentham's Gaze.

Why is 3-D Secure a single sign-on system?

Since the blog post on our paper Verified by Visa and MasterCard SecureCode: or, How Not to Design Authentication, there has been quite a bit of feedback, including media coverage. Generally, commenters have agreed with our conclusions, and there have been some informative contributions giving industry perspectives, including at Finextra.

One question which has appeared a few times is why we called 3-D Secure (3DS) a single sign-on (SSO) system. 3DS certainly wasn’t designed as a SSO system, but I think it meets the key requirement: it allows one party to authenticate another, without credentials (passwords, keys, etc…) being set up in advance. Just like other SSO systems like OpenID and Windows CardSpace, there is some trusted intermediary which both communication parties have a relationship with, who facilitates the authentication process.

For this reason, I think it is fair to classify 3DS as a special-purpose SSO system. Your card number acts as a pseudonym, and the protocol gives the merchant some assurance that the customer is the legitimate controller of that pseudonym. This is a very similar situation to OpenID, which provides the relying party assurance that the visitor is the legitimate controller of a particular URL. On top of this basic functionality, 3DS also gives the merchant assurance that the customer is able to pay for the goods, and provides a mechanism to transfer funds.

People are permitted to have multiple cards, but this does not prevent 3DS from being a SSO system. In fact, it is generally desirable, for privacy purposes, to allow users to have multiple pseudonyms. Existing SSO systems support this in various ways — OpenID lets you have multiple domain names, and Windows CardSpace uses clever cryptography. Another question which came up was whether 3DS was actually transaction authentication, because the issuer does get a description of the transaction. I would argue not, because the transaction description does not go to the customer, thus the protocol is vulnerable to a man-in-the-middle attack if the customer’s PC is compromised.

A separate point is whether it is useful to categorize 3DS as SSO. I would argue yes, because we can then compare 3DS to other SSO systems. For example, OpenID uses the domain name system to produce a hierarchical name space. In contrast, 3DS has a flat numerical namespace and additional intermediaries in the authentication process. Such architectural comparisons between deployed systems are very useful to future designers. In fact, the most significant result the paper presents is one from security economics: 3DS is inferior in almost every way to the competition, yet succeeded because incentives were aligned. Specifically, the reward for implementing 3DS is the ability to push fraud costs onto someone else — from the merchant to the issuer, and from the issuer to the customer.

Encoding integers in the EMV protocol

On the 1st of January 2010, many German bank customers found that their banking smart cards had stopped working. Details of why are still unclear, but indications are that the cards believed that the date was 2016, rather than 2010, and so refused to process a transaction supposedly after their expiry dates. This problem could turn out to be quite expensive for the cards’ manufacturer, Gemalto: their shares dropped almost 4%, and they have booked a €10 m charge to handle the consequences.

These cards implement the EMV protocol (the same one used for Chip and PIN in the UK). Here, the card is sent the current date in 3-byte YYMMDD binary-coded decimal (BCD) format, i.e. “100101” on 1 January 2010. If, however, this is interpreted as hexadecimal, the card will think the year is 2016 (in hexadecimal, 1 January 2010 would have been “0a0101”). Since the digits 0–9 are encoded identically in BCD and hexadecimal, we can see why this problem only occurred in 2010*.
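The mix-up is easy to reproduce in a few lines of Python. This is only an illustration of the two interpretations of the date bytes, not the cards' actual firmware:

```python
def bcd_to_int(data: bytes) -> int:
    """Decode binary-coded decimal: each 4-bit nibble is one decimal digit."""
    n = 0
    for byte in data:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

# EMV sends 1 January 2010 as three BCD bytes: YY MM DD
date = bytes([0x10, 0x01, 0x01])

year_correct = 2000 + bcd_to_int(date[:1])  # BCD reading: 0x10 -> 10, i.e. 2010
year_buggy   = 2000 + date[0]               # binary reading: 0x10 -> 16, i.e. 2016
```

For any year up to 2009 the first byte holds only digits 0–9, where the two readings agree; 0x10 is the first value where they diverge, which is why the cards worked until New Year's Day 2010.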

In one sense, this looks like a foolish error, and should have been caught in testing. However, before criticizing too harshly, one should remember that EMV is almost impossible to implement perfectly. I have written a fairly complete implementation of the protocol and frequently find edge cases which are insufficiently documented, making dealing with them error-prone. Not only is the specification vague, but it is also long — the first public version in 1996 was 201 pages, and it grew to 765 pages by 2008. Moreover, much of the complexity is unnecessary. In this article I will give just one example of this — the fact that there are nine different ways to encode integers.

Continue reading Encoding integers in the EMV protocol

Relay attack featured on Dutch TV

Yesterday, the Dutch TV programme “Goudzoekers” featured Saar Drimer and me demonstrating a relay attack against the recently introduced Chip and PIN system in The Netherlands. The video can be found online, in both Windows Media and Silverlight formats, as well as in Flash below. The production team have published a synopsis (translated version) on their blog, and today there have been some follow-ups in the press, for example De Telegraaf (translated version).

Continue reading Relay attack featured on Dutch TV

Interview with Steven Murdoch on Finextra

Today, Finextra (a financial technology news website) published a video interview with me, discussing my research on banks using card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approach taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so these are subjected to public scrutiny only late in deployment, if at all. This is in contrast with Internet protocols, which are commonly first discussed within industry and academia; then the specification is made public, and only then is it implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

Also, I discuss some of the non-technical design decisions involved in the deployment of security technology. Specifically, their design needs to take into account risk analysis, psychology, and usability, not just cryptography. Organizational structures also need to incentivize security: the groups who design security mechanisms should be responsible for their failures, and structures should discourage knowledge of security failings from being hidden from management. If necessary, a separate penetration testing team should report directly to board level.

Finally I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the use of smart card readers, being issued in the UK to authenticate online banking transactions, can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.

BBC Inside Out, Monday 26th October 2009, 19:30, BBC One (East)

Which? survey of online banking security

Today Which? released their survey of online banking security. The results are summarized in their press release and the full article is in the September edition of “Which? Computing”.

The article found that there was substantial variation in what authentication measures UK banks used. Some used normal password fields, some used drop-down boxes, and some used a CAP smart card reader. All of these are vulnerable to attack by a sophisticated criminal (see for example our paper on CAP), but the article argued that it is better to force attackers to work harder to break into a customer’s account. Whether this approach would actually decrease fraud is an interesting question. Intuitively it makes sense, but it might just succeed in putting the manufacturers of unsophisticated malware out of business, and the criminals actually performing the fraud would just buy a smarter kit.

However, what I found most interesting were the responses from the banks whose sites were surveyed.

Barclays (which came top due to their use of CAP) were pleased:

“We believe our customers have the best security packages of all online banks to protect them and their money.”

In contrast, Halifax (who came bottom) didn’t like the survey saying:

“Any meaningful assessment of a bank’s fraud prevention tools needs to fully examine all systems whether they can be seen directly by customers or not and we would never release details of these systems to any third party.”

I suppose it is unsurprising that the banks which came top were happier with the results than those which came bottom, but to a certain extent I sympathize with Halifax. They are correct in saying that back-end controls (e.g. spotting suspicious transactions and reversing fraudulent ones) are very important tools at preventing fraud. I think the article is clear on this point, always saying that they are comparing “customer-facing” or “visible” security measures and including a section describing the limitations of the study.

However, I think this complaint indicates a deeper problem with consumer banking: customers have no way to tell which bank will better protect their money. About the only figure the banks offered was HSBC saying they were better than average. Fraud figures for individual banks do exist (APACS collects them), and they are shared between the banks, but they are withheld from customers and shareholders. So I don’t think it is surprising that consumer groups are comparing the only thing they can.

I can understand the reluctance in publishing fraud figures — it makes customers think their money is not safe, and no bank wants to be at the bottom. However, I do think it would be in the long-term best interests of everyone if there could be meaningful comparison of banks in terms of security. Customers can compare their safety while driving and while in hospital, but why not when they bank online?

So while I admit there are problems with the Which? report, I do think it is a step in the right direction. They are joining a growing group of security professionals who are calling for better data on security breaches. Which? were also behind the survey which found that 20% of fraud victims don’t get their money back, and a campaign to get better statistics on complaints against banks. I wish them luck in their efforts.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol, but a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate, without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK only offer SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).

The transaction process for an SDA card goes roughly as follows:

Card authentication:
    Card → Terminal: records, sig_BANK{records}
Cardholder verification:
    Terminal → Card: PIN entered
    Card → Terminal: PIN OK
Transaction authorization:
    Terminal → Card: transaction, nonce
    Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card, can read the digital signature to create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and also have been found in the wild.
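The failure mode can be sketched in a few lines of Python. The class and function names here are hypothetical simplifications, not EMV specification code, but they capture why an offline terminal accepts the clone:

```python
class YesCard:
    """A hypothetical clone of an SDA card (illustration only)."""

    def __init__(self, records: bytes, signature: bytes):
        # Static data copied verbatim from a genuine card. Since the
        # signature covers only these fixed records, replaying it
        # passes card authentication.
        self.records = records
        self.signature = signature

    def verify_pin(self, pin: str) -> bool:
        # The PIN response is not authenticated, so the clone can
        # simply claim that any PIN is correct.
        return True


def offline_terminal_accepts(card, entered_pin, verify_signature) -> bool:
    # Step 1: card authentication (SDA) - check the static signature.
    if not verify_signature(card.records, card.signature):
        return False
    # Step 2: cardholder verification - trust the card's (unauthenticated) reply.
    return card.verify_pin(entered_pin)


# A clone built from data skimmed off a genuine card passes both steps,
# whatever PIN the fraudster types:
genuine_records, genuine_sig = b"records", b"bank-signature"
clone = YesCard(genuine_records, genuine_sig)
accepted = offline_terminal_accepts(
    clone, "0000",
    verify_signature=lambda rec, sig: (rec, sig) == (genuine_records, genuine_sig))
```

The two bullet points above map directly onto the two steps: the replayable signature defeats step 1, and the unauthenticated PIN response defeats step 2.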

Continue reading Defending against wedge attacks in Chip & PIN

Reducing interruptions with screentimelock

Sometimes I find that I need to concentrate, but there are too many distractions. Emails, IRC, and Twitter are very useful, but also create interruptions. For some types of task this is not a problem, but for others the time it takes to get back to being productive after an interruption is substantial. Or sometimes there is an imminent and important deadline and it is desirable to avoid being sidetracked.

Self-discipline is one approach for these situations, but sometimes it’s not enough. So I wrote a simple Python script — screentimelock — for screen, which locks the terminal for a period of time. I don’t need to use this often, but since my email, IRC, and Twitter clients all reside in a screen session, I find it works well for me.

The script is started by screen’s lockscreen command, which is by default invoked by Ctrl-A X. Then, the screen will be cleared, which is helpful as often I find that just seeing the email subject lines is enough to act as a distraction. The screen will remain cleared and the terminal locked, until the next hour (e.g. if the script is activated at 7:15, it will unlock at 8:00).
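The unlock-until-the-next-hour behaviour amounts to one small calculation. A minimal sketch of it (not the script's actual code; the function name is mine):

```python
import datetime


def seconds_until_next_hour(now=None):
    """Return how long the lock should hold: until the top of the next hour."""
    now = now or datetime.datetime.now()
    next_hour = (now.replace(minute=0, second=0, microsecond=0)
                 + datetime.timedelta(hours=1))
    return (next_hour - now).total_seconds()


# Locked at 7:15 -> unlocks at 8:00, i.e. 45 minutes later.
wait = seconds_until_next_hour(datetime.datetime(2009, 10, 26, 7, 15))
```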

It is of course possible to bypass the lock. Ctrl-C is ignored, but logging in from a different location and either killing the script or re-attaching the screen will work. Still, this is far more effort than glancing at the terminal, so I find the speed-bump screentimelock provides is enough to avoid temptation.

I’m releasing this software, under the BSD license, in the hope that other people find it useful. The download link, installation instructions and configuration parameters can be found on the screentimelock homepage. Any comments would be appreciated, but despite Zawinski’s Law, this program will not be extended to support reading mail 🙂

EFF and Tor Project in Google Summer of Code

The EFF and the Tor Project have been accepted into Google Summer of Code. This programme offers students a stipend for contributing to open source software over a 3 month period. Google Summer of Code has been running since 2005 and the Tor project has been a participant since 2007.

We are looking for talented and motivated students to work on a number of projects to improve Tor, and related applications. Students are also welcome to come up with their own ideas. Applications must be submitted by 3 April 2009. For further information, and details on how to apply, see the Tor blog.

Hot Topics in Privacy Enhancing Technologies (HotPETs 2009)

HotPETs – the 2nd Hot Topics in Privacy Enhancing Technologies (co-located with PETS) will be held in Seattle, 5–7 August 2009.

HotPETs is the forum for new ideas on privacy, anonymity, censorship resistance, and related topics. Work-in-progress is welcomed, and the format of the workshop will be to encourage feedback and discussion. Submissions are especially encouraged on the human side of privacy: what do people believe about privacy? How does privacy work in existing institutions?

Papers (up to 15 pages) are due by 8 May 2009. Further information can be found in the call for papers.