Category Archives: Banking security

The security of the banking system, as well as hardware and software commonly used in such installations

Why is 3-D Secure a single sign-on system?

Since the blog post on our paper Verified by Visa and MasterCard SecureCode: or, How Not to Design Authentication, there has been quite a bit of feedback, including media coverage. Generally, commenters have agreed with our conclusions, and there have been some informative contributions giving industry perspectives, including at Finextra.

One question which has appeared a few times is why we called 3-D Secure (3DS) a single sign-on (SSO) system. 3DS certainly wasn’t designed as an SSO system, but I think it meets the key requirement: it allows one party to authenticate another, without credentials (passwords, keys, etc.) being set up in advance. As with other SSO systems, such as OpenID and Windows CardSpace, there is some trusted intermediary with which both communicating parties have a relationship, and which facilitates the authentication process.
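To make the pattern concrete, here is a minimal sketch of intermediary-based authentication in Python. It is our abstraction, not the real 3DS message flow (which passes signed messages between the merchant’s plug-in, the scheme’s directory server and the issuer’s access control server); the card number, password and keys below are invented, and an HMAC stands in for the signed assertion.

```python
import hashlib
import hmac

ISSUER_KEYS = {"issuer-a": b"hypothetical-issuer-key"}  # held via the scheme
DIRECTORY = {"4000000000000002": "issuer-a"}            # pseudonym -> issuer

def issuer_authenticate(issuer, card_number, password):
    # The customer authenticates to their own issuer; the merchant never
    # sees the credential. On success the issuer returns a MAC'd assertion.
    if password != "correct-horse":                     # toy credential check
        return None
    return hmac.new(ISSUER_KEYS[issuer], card_number.encode(),
                    hashlib.sha256).digest()

def merchant_verify(card_number, assertion):
    # The merchant locates the issuer through the intermediary and checks
    # the assertion; it holds no credential for the customer, only a
    # relationship with the scheme.
    issuer = DIRECTORY.get(card_number)
    if issuer is None or assertion is None:
        return False
    expected = hmac.new(ISSUER_KEYS[issuer], card_number.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

token = issuer_authenticate("issuer-a", "4000000000000002", "correct-horse")
print(merchant_verify("4000000000000002", token))       # True
```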

For this reason, I think it is fair to classify 3DS as a special-purpose SSO system. Your card number acts as a pseudonym, and the protocol gives the merchant some assurance that the customer is the legitimate controller of that pseudonym. This is a very similar situation to OpenID, which provides the relying party assurance that the visitor is the legitimate controller of a particular URL. On top of this basic functionality, 3DS also gives the merchant assurance that the customer is able to pay for the goods, and provides a mechanism to transfer funds.

People are permitted to have multiple cards, but this does not prevent 3DS from being an SSO system. In fact, it is generally desirable, for privacy purposes, to allow users to have multiple pseudonyms. Existing SSO systems support this in various ways: OpenID lets you have multiple domain names, and Windows CardSpace uses clever cryptography. Another question which came up was whether 3DS is actually transaction authentication, since the issuer does get a description of the transaction. I would argue not, because the transaction description does not go to the customer, and thus the protocol is vulnerable to a man-in-the-middle attack if the customer’s PC is compromised.

A separate point is whether it is useful to categorize 3DS as SSO. I would argue yes, because we can then compare 3DS to other SSO systems. For example, OpenID uses the domain name system to produce a hierarchical namespace. In contrast, 3DS has a flat numerical namespace and additional intermediaries in the authentication process. Such architectural comparisons between deployed systems are very useful to future designers. In fact, the most significant result the paper presents is one from security economics: 3DS is inferior in almost every way to the competition, yet succeeded because incentives were aligned. Specifically, the reward for implementing 3DS is the ability to push fraud costs onto someone else: from the merchant to the issuer, and from the issuer to the customer.

How online card security fails

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Update (2010-01-27): There has been some follow-up media coverage.

Update (2010-01-28): The New Scientist also has the story, as has Ars Technica.

How hard can it be to measure phishing?

Last Friday I went to a workshop organised by the Oxford Internet Institute on “Mapping and Measuring Cybercrime”. The attendees represented many disciplines, from lawyers, through ePolicy specialists, to serving police officers and an ex-Government minister. Much of the discussion related to the difficulty of saying precisely what is or is not “cybercrime”, and what might be meant by mapping or measuring it.

The position paper I submitted (one more of the extensive Moore/Clayton canon on phishing) took a step back (though of course we intend it to be a step forward), in that it looked at the very rich datasets we have for phishing and asked whether they allow us to usefully map or measure that particular criminal speciality.

In practice, we believe, bias in the data, and the bias of those who interpret it, mean that considerable care is needed to understand what all the data actually means. We give an example from our own work of how failing to understand the bias meant that we initially misunderstood the data, and we show how various intentional distortions arise because of the self-interest of those who collect the data.

Extrapolating, this all means that getting better data on other types of cybercrime may not prove to be quite as useful as might initially be thought.

As ever, reading the whole paper (it’s only 4 sides!) is highly recommended, but to give a flavour of the problem we’re drawing attention to:

If a phishing gang host their webpages on a thousand fraudulent domains, using fifty stolen credit cards to purchase them from a dozen registrars, and then transfer money out of a hundred customer accounts leading to a monetary loss in six cases: is that 1000 crimes, or 50, or 12, or 100, or 6?

The phishing website removal companies would say that there were 1000 incidents, because they need to get 1000 domains suspended. The credit card companies would say there were 50 incidents, because 50 cardholders ought to have money reimbursed. Equally, they would have 12 registrars to “charge back”, because the registrars had accepted fraudulent registrations (there might have been any number of actual credit card money transfer events between 12 and 1000, depending on whether the domains were purchased in bulk). The banks will doubtless see the criminality as 100 unauthorised transfers of money out of their customer accounts; but if they claw back almost all of the cash (because it remains within the mainstream banking system), then the six-monthly Financial Fraud Action UK (formerly APACS) report will merely include the monetary losses from the 6 successful thefts.

Clearly, what you count depends on who you are — but crucially, in a world where resources are deployed to meet measurement targets (and your job is at risk if you miss them), deciding what to measure will bias your decisions on what you actually do and hence how effective you are at defeating the criminals.

Encoding integers in the EMV protocol

On the 1st of January 2010, many German bank customers found that their banking smart cards had stopped working. Details of why this happened are still unclear, but indications are that the cards believed the date to be 2016, rather than 2010, and so refused to process transactions supposedly after their expiry dates. This problem could turn out to be quite expensive for the cards’ manufacturer, Gemalto: their shares dropped almost 4%, and they have booked a €10m charge to handle the consequences.

These cards implement the EMV protocol (the same one used for Chip and PIN in the UK). Here, the card is sent the current date in 3-byte YYMMDD binary-coded decimal (BCD) format, i.e. “100101” for 1 January 2010. If, however, this is interpreted as hexadecimal, the card will think the year is 2016 (in hexadecimal, 1 January 2010 would have been “0a0101”). Since the digits 0–9 encode identically in BCD and hexadecimal, we can see why this problem only occurred in 2010*.
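A short illustration of the suspected bug in Python (our reconstruction of the failure mode, not Gemalto’s actual code):

```python
def bcd_decode(data: bytes) -> list:
    # Correct: each byte holds two decimal digits, e.g. 0x10 -> 10.
    return [(b >> 4) * 10 + (b & 0x0F) for b in data]

def binary_decode(data: bytes) -> list:
    # Buggy: each byte taken as a plain binary integer, e.g. 0x10 -> 16.
    return list(data)

date_2010 = bytes([0x10, 0x01, 0x01])  # YYMMDD in BCD: 1 January 2010
print(bcd_decode(date_2010))           # [10, 1, 1] -> 2010, correct
print(binary_decode(date_2010))        # [16, 1, 1] -> 2016, card rejects

# Bytes 0x00-0x09 decode identically either way, which is why cards kept
# working until the year ticked over from (20)09 to (20)10.
```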

In one sense, this looks like a foolish error, and should have been caught in testing. However, before criticizing too harshly, one should remember that EMV is almost impossible to implement perfectly. I have written a fairly complete implementation of the protocol and frequently find edge cases which are insufficiently documented, making dealing with them error-prone. Not only is the specification vague, but it is also long — the first public version in 1996 was 201 pages, and it grew to 765 pages by 2008. Moreover, much of the complexity is unnecessary. In this article I will give just one example of this — the fact that there are nine different ways to encode integers.

Continue reading Encoding integers in the EMV protocol

Relay attack featured on Dutch TV

Yesterday, the Dutch TV programme “Goudzoekers” featured Saar Drimer and me demonstrating a relay attack against the recently introduced Chip and PIN system in The Netherlands. The video can be found online, in both Windows Media and Silverlight formats, as well as in Flash below. The production team have published a synopsis (translated version) on their blog, and today there have been some follow-ups in the press, for example De Telegraaf (translated version).

Continue reading Relay attack featured on Dutch TV

Interview with Steven Murdoch on Finextra

Today, Finextra (a financial technology news website) has published a video interview with me, discussing my research on banks’ use of card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approaches taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so these protocols receive public scrutiny late in deployment, if at all. This is in contrast with Internet protocols, which are commonly first discussed within industry and academia, then published as specifications, and only then implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

I also discuss some of the non-technical design decisions involved in the deployment of security technology. Specifically, the design needs to take into account risk analysis, psychology and usability, not just cryptography. Organizational structures also need to incentivize security: groups who design security mechanisms should be held responsible for their failures, and knowledge of security failings should not be hidden from management. If necessary, a separate penetration-testing team should report directly to board level.

Finally I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the use of smart card readers, being issued in the UK to authenticate online banking transactions, can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.
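For readers wondering what “collecting a code” involves, here is a rough sketch of how the card-generated one-time code is produced. It is heavily simplified from the reverse engineering in our Financial Cryptography 2009 paper: the real card computes an EMV application cryptogram under a 3DES key, the fields and the bit-selection bitmap vary by issuer, and the HMAC and key below are stand-ins we use for illustration.

```python
import hashlib
import hmac

def cap_code(card_key: bytes, atc: int, challenge: str) -> str:
    # The card MACs its application transaction counter (ATC) together
    # with whatever the bank asked the customer to type (a challenge,
    # an amount, or nothing at all for a simple login code).
    msg = atc.to_bytes(2, "big") + challenge.encode()
    cryptogram = hmac.new(card_key, msg, hashlib.sha256).digest()
    # The reader then selects bits from the cryptogram (per an
    # issuer-defined bitmap) and displays them as decimal digits,
    # which the customer types into the online banking site.
    return str(int.from_bytes(cryptogram[:4], "big") % 10**8).zfill(8)

# A tampered terminal that can drive the card through this computation
# can harvest a valid code while apparently processing a purchase.
print(cap_code(b"hypothetical-card-key", atc=42, challenge="12345678"))
```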

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.

BBC Inside Out, Monday 26th October 2009, 19:30, BBC One (East)

Tuning in to random numbers

Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.

Random numbers are a vital part of cryptography: if predictable numbers are being used, an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations; without this randomness, algorithms such as RSA become susceptible.

Creating unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest the small timing differences between them (jitter).
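As a toy model (our simplification, not a hardware design), think of each output bit as the XOR of which half-cycle each free-running oscillator happens to be in at the sampling instant:

```python
import random

def sample_bit(n_oscillators=4, elapsed_periods=1000.0):
    # Each oscillator drifts slightly in frequency (jitter), so after many
    # periods its phase at the sampling instant is unpredictable.
    bit = 0
    for _ in range(n_oscillators):
        freq = 1.0 + random.gauss(0, 0.01)       # small per-oscillator drift
        phase = (elapsed_periods * freq) % 1.0   # phase when sampled
        bit ^= 1 if phase < 0.5 else 0           # XOR the oscillators
    return bit

print([sample_bit() for _ in range(16)])
# If an attacker locks all the oscillators to one injected frequency, their
# phases become correlated and the XOR output stops being unpredictable.
```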

But what happens if they aren’t independent? In particular, what happens if the circuit faces an attacker who can manipulate the system from the outside?

The attack turns out to be fairly straightforward. An effect called injection locking, known since 1665, occurs when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on a wall tend to synchronise the swing of their pendula through small vibrations transmitted through the wall.

In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way involves forcing a frequency onto the power supply from which the ring oscillators are powered. If there are any imbalances in the circuit, we suggest that they make the ring more susceptible to injection locking at that point. We examined the effects of power supply injection, and can envisage a similar attack by irradiation with electromagnetic fields.
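For intuition, here is a toy numerical integration of Adler’s equation, a standard first-order model of injection locking (our illustration, not the circuit-level analysis in the paper). When the frequency offset between the oscillator and the injected tone is within the locking range, the phase difference settles to a constant; outside it, the phase difference keeps growing.

```python
import math

def final_phase_difference(delta_omega, coupling, steps=20000, dt=1e-3):
    # Adler's equation: d(phi)/dt = delta_omega - coupling * sin(phi)
    phi = 1.0                       # arbitrary starting phase difference
    for _ in range(steps):
        phi += (delta_omega - coupling * math.sin(phi)) * dt
    return phi

print(final_phase_difference(0.5, 1.0))  # ~0.52 rad: locked to the injection
print(final_phase_difference(2.0, 1.0))  # keeps growing: outside lock range
```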

And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits that are used in an ATM transaction, we managed to reduce the number of possibilities from 4 billion to about 225.

So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. Then he needs to take a fake card to an ATM containing this microcontroller. On average he’ll need to record 15 transactions on the card and try 15 transactions at the ATM before he can steal the money: a birthday-style effect, since 15 recorded times 15 attempted gives 225 pairs, as many as there are possible values (15 being the square root of 225). This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when he was nowhere near an ATM.
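A quick simulation of this birthday-style argument (our sketch of the attack model, using the 225-value space and the 15/15 split from the figures above):

```python
import random

def trial(space=225, recorded=15, attempts=15):
    # Record 15 nonces, then hope one of 15 ATM attempts hits a recorded one.
    seen = {random.randrange(space) for _ in range(recorded)}
    return any(random.randrange(space) in seen for _ in range(attempts))

runs = 100_000
hits = sum(trial() for _ in range(runs))
print(f"success in {hits / runs:.0%} of runs")  # roughly 60%
```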

While we looked at power supply injection, the ATM could also be attacked electromagnetically. Park a car next to the ATM emitting a 10 GHz signal amplitude modulated by the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz will penetrate the ventilation slots but then be filtered away, leaving 1.8 MHz in the power supply. When the car drives away there’s no evidence that the random numbers were bad – and bad random numbers are very difficult to detect anyway.

We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of 188. While we can’t completely predict the random number generator, there are some sequences that can be seen.
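To give a flavour of what those tests check, here is the simplest of them, the frequency (monobit) test from NIST SP 800-22: a sequence whose balance of ones and zeros drifts too far from 50/50 gets a small p-value and fails (conventionally at p < 0.01).

```python
import math
import random

def monobit_p_value(bits):
    # Sum the bits as +1/-1 and compare the excess to what fair coin flips
    # would produce; erfc converts the normalised excess to a p-value.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

good = [random.getrandbits(1) for _ in range(100_000)]
biased = [1 if random.random() < 0.51 else 0 for _ in range(100_000)]
print(monobit_p_value(good))    # typically well above 0.01: passes
print(monobit_p_value(biased))  # tiny: even a 1% bias is easily caught
```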

So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.

You can find the paper and slides on my website.

Which? survey of online banking security

Today Which? released their survey of online banking security. The results are summarized in their press release and the full article is in the September edition of “Which? Computing”.

The article found that there was substantial variation in what authentication measures UK banks used. Some used normal password fields, some used drop-down boxes, and some used a CAP smart card reader. All of these are vulnerable to attack by a sophisticated criminal (see for example our paper on CAP), but the article argued that it is better to force attackers to work harder to break into a customer’s account. Whether this approach would actually decrease fraud is an interesting question. Intuitively it makes sense, but it might just succeed in putting the manufacturers of unsophisticated malware out of business, and the criminals actually performing the fraud would just buy a smarter kit.

However, what I found most interesting were the responses from the banks whose sites were surveyed.

Barclays (which came top due to their use of CAP) were pleased:

“We believe our customers have the best security packages of all online banks to protect them and their money.”

In contrast, Halifax (who came bottom) didn’t like the survey, saying:

“Any meaningful assessment of a bank’s fraud prevention tools needs to fully examine all systems whether they can be seen directly by customers or not and we would never release details of these systems to any third party.”

I suppose it is unsurprising that the banks which came top were happier with the results than those which came bottom, but to a certain extent I sympathize with Halifax. They are correct in saying that back-end controls (e.g. spotting suspicious transactions and reversing fraudulent ones) are very important tools for preventing fraud. I think the article is clear on this point, consistently saying that it compares “customer-facing” or “visible” security measures, and including a section describing the limitations of the study.

However, I think this complaint indicates a deeper problem with consumer banking: customers have no way to tell which bank will better protect their money. About the only figure the banks offered was HSBC saying they were better than average. Fraud figures for individual banks do exist (APACS collects them), and they are shared between the banks, but they are withheld from customers and shareholders. So I don’t think it is surprising that consumer groups are comparing the only thing they can.

I can understand the reluctance to publish fraud figures: they make customers think their money is not safe, and no bank wants to be at the bottom. However, I do think it would be in the long-term best interests of everyone if there could be meaningful comparison of banks in terms of security. Customers can compare their safety while driving and while in hospital, so why not when they bank online?

So while I admit there are problems with the Which? report, I do think it is a step in the right direction. They are joining a growing group of security professionals who are calling for better data on security breaches. Which? were also behind the survey which found that 20% of fraud victims don’t get their money back, and a campaign to get better statistics on complaints against banks. I wish them luck in their efforts.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate, without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK have only offered SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc.). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).
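To see the shape of this chain, here is a toy SDA-style verification using the pyca/cryptography library. It is only a sketch: real EMV defines its own RSA signature format with message recovery rather than PKCS#1 v1.5, scheme public keys are pre-loaded into terminals rather than generated on the fly, and the records below are invented.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

scheme = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issuer = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The payment scheme certifies the issuing bank's public key...
issuer_pub = issuer.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
issuer_cert_sig = scheme.sign(issuer_pub, padding.PKCS1v15(), hashes.SHA256())

# ...and the bank signs the card's static records, once, at personalisation.
records = b"PAN=4000000000000002;EXPIRY=1012"
records_sig = issuer.sign(records, padding.PKCS1v15(), hashes.SHA256())

# Terminal side, offline: the trust anchor is the scheme's public key.
# verify() raises InvalidSignature if either link in the chain is bad.
scheme.public_key().verify(issuer_cert_sig, issuer_pub,
                           padding.PKCS1v15(), hashes.SHA256())
issuer.public_key().verify(records_sig, records,
                           padding.PKCS1v15(), hashes.SHA256())
print("static data authenticated")  # yet the same bytes replay from a clone
```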

The transaction process for an SDA card goes roughly as follows:

Card authentication        Card → Terminal: records, sig_BANK{records}
Cardholder verification    Terminal → Card: PIN entered
                           Card → Terminal: PIN OK
Transaction authorization  Terminal → Card: transaction, nonce
                           Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
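A minimal sketch of a clone’s logic (our illustration of the failure mode, not code from any real “yes card”):

```python
import os

class YesCard:
    """Clone of an SDA card: replays static data and accepts any PIN."""

    def __init__(self, stolen_records: bytes, stolen_signature: bytes):
        # Both values can be read from a genuine card in moments.
        self.records = stolen_records
        self.signature = stolen_signature

    def read_records(self):
        # Card authentication: replay the static signature; it verifies,
        # because it really was produced by the issuing bank.
        return self.records, self.signature

    def verify_pin(self, pin: str) -> bool:
        # Cardholder verification: the response is unauthenticated, so
        # the clone simply says yes to whatever PIN was typed.
        return True

    def generate_mac(self, transaction: bytes, nonce: bytes) -> bytes:
        # Transaction authorization: the clone never learned the real
        # card's MAC key, so it returns garbage; only the issuer could
        # check it, so an offline terminal accepts the transaction anyway.
        return os.urandom(8)
```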

Continue reading Defending against wedge attacks in Chip & PIN