Author Archive

Dec 22, '08

People often ask me what they can do to avoid becoming victims of card fraud when they pay with their cards at shops or use them in ATMs (for on-line card fraud tips see e-victims.org, for example). My short answer is usually “not much, except checking your statements and reporting anomalies to the bank”. This post is the longer answer — little practical things, some a bit over the top, I admit — that cardholders can do to decrease the risk of falling victim to card fraud. (Some of these will only apply to UK-issued cards, some to all smartcards, and the rest to all types of cards.)

Practical:

1. If you have a UK EMV card, ask the bank to send you a new card if it was issued before the first quarter of 2008. APACS has said that cards issued from January 2008 have an iCVV (‘integrated circuit card verification value’) in the chip that isn’t the same as the one on the magnetic stripe (CVV1). This means that if the magstripe data was read off the chip (it’s there for fallback) and written onto a blank magstripe card, it shouldn’t — if iCVVs are indeed checked — work at ATMs anywhere. The bad news is that in February 2008 only two out of four newly minted cards that we tested had iCVV, though today your chances may be better.
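The issuer-side check that makes this countermeasure work can be sketched as follows. This is purely illustrative: the function name, record layout and CVV values are made up, not a real authorization API.

```python
# Hypothetical sketch of the issuer's magstripe authorization check.
# All names and values here are illustrative, not a real bank system.
def authorize_magstripe(presented_cvv: str, issuer_records: dict, pan: str) -> bool:
    """Decline if the CVV on the presented stripe doesn't match the CVV1
    on record for this PAN. A card cloned from chip data carries the iCVV,
    which (for UK cards issued from January 2008) differs from CVV1, so
    the clone fails this check."""
    return presented_cvv == issuer_records[pan]["cvv1"]

records = {"5301250070000191": {"cvv1": "472", "icvv": "918"}}  # made-up values

# A genuine stripe swipe presents CVV1 and is authorized:
assert authorize_magstripe("472", records, "5301250070000191")
# A clone written from data read off the chip carries the iCVV and fails:
assert not authorize_magstripe("918", records, "5301250070000191")
```

The whole defence hinges on the parenthetical above: if the issuer does not actually compare the presented value against CVV1, cloned-from-chip cards still work.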

2. In places where you can pick up the PIN entry device (PED), do so (Sainsbury’s actually encourages this). Firstly, it may allow you to hide your PIN from the people behind you in the queue. Secondly, it allows you to give the device a cursory inspection: if there is more than one wire coming out of the back, or the thing falls apart, you shouldn’t use it. (The picture on the right shows a mounted PED at a high-street shop that is crudely taped together.) In addition, be suspicious of PEDs that are mounted in an irregular way such that you can’t move or comfortably use them; this may indicate that the merchant has a very good camera angle on the keypad, and if you move the PED, it may go out of focus. Of course, some stores mount their PEDs such that they can’t be moved, so you’ll have to use your judgment.

(more…)

May 21, '08

In February, Steven Murdoch, Ross Anderson and I reported our findings on system-level failures of widely deployed PIN Entry Devices (PED) and the Chip and PIN scheme as a whole. Steven is in Oakland presenting the work described in our paper at the IEEE Symposium on Security and Privacy (slides).

We are very pleased that we are the recipients of the new “Most Practical Paper” award of the conference, given to “the paper most likely to immediately improve the security of current environments and systems”. Thanks to everyone who supported this work!

IEEE Security & Privacy Magazine Award

Feb 26, '08

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering tamper-proofing mechanisms, and in clear violation of their supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and PED.

(Pictures: the tapping attack applied to the Ingenico and the Dione PEDs.)

In addition to the PIN, as part of the transaction the PED reads an exact replica of the magnetic stripe (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic stripe card and withdraw money from an ATM that does not read the chip.
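To illustrate what the tapped line yields, here is a sketch of parsing the Track-2-equivalent data the chip serves up. The field layout follows ISO/IEC 7813 (PAN, a ‘D’ separator, expiry, service code, discretionary data), but the function and all values are made up for illustration.

```python
# Illustrative parser for Track-2-equivalent data as carried on the chip.
# Layout per ISO/IEC 7813; the PAN and other values below are made up.
def parse_track2(t2: str) -> dict:
    pan, rest = t2.split("D")       # 'D' separates the PAN from the rest
    return {
        "pan": pan,
        "expiry": rest[:4],         # YYMM
        "service_code": rest[4:7],
        "discretionary": rest[7:].rstrip("F"),  # padded; holds the CVV
    }

fields = parse_track2("5301250070000191D08051010000000000472F")
assert fields["pan"] == "5301250070000191"
assert fields["expiry"] == "0805"
assert fields["service_code"] == "101"
```

Everything an attacker needs to encode a working magnetic stripe card is in those few fields, which is why intercepting this data in the clear is so damaging.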

We also found that the certification process of these PEDs is flawed. APACS has been effectively approving PEDs for the UK market as Common Criteria (CC) Evaluated, which does not equal Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike CC Certified products, the reports are kept secret, and governmental Certification Bodies do not do quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS states that the PEDs we examined will not be de-certified or removed, and the same goes for the labs that certified them; APACS would not even tell us who those labs are.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.

Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.

Update 2 (2008-03-21): If the link above doesn’t work try YouTube: part1 and part 2.

Sep 24, '07

For a while I have been looking very closely at how FPGA cores are distributed (the common term is “IP cores”, or just “IP”, but I try to minimize the use of this over-used catch-all phrase). In what I hope will be a series of posts, I will mostly discuss the problem (rather than solutions), as I think it needs to be addressed and adequately defined first. I’ll start with my attempt at concise definitions of the following:

FPGA: Field Programmable Gate Arrays are generic semiconductor devices comprising interconnected functional blocks that can be programmed, and reprogrammed, to perform user-described logic functions.

Cores: ready-made functional descriptions that allow system developers to save on design cost and time by purchasing them from third parties and integrating them into their own designs.

The “cores distribution problem” is easy to define but challenging to solve: how can a digital design be distributed by its designer such that the designer can a) enable customers to evaluate, simulate, and integrate it into their own designs, b) limit the number of instances that can be made of it, and c) make it run only on specific devices? If this sounds like “Digital Rights Management” to you, that’s exactly what it is: DRM for FPGAs. Despite the abuses by some industries that gave DRM a bad name, in our application there may be benefits for both the design owner and the end user. We also know that satisfying the three conditions above for a whole industry is challenging, and we are not even close to a solution.
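Condition c) can be sketched in a toy form: the core vendor derives a per-device key from a secret each device holds, and encrypts the core under it, so only the intended device can recover it. The scheme, names and XOR “cipher” below are all illustrative stand-ins, not an industry mechanism.

```python
# Toy sketch of binding a core to a specific device (condition c).
# The key derivation and XOR keystream are illustrative only; a real
# scheme would use the FPGA's bitstream decryption engine (e.g. AES).
import hashlib
import hmac

def device_key(vendor_secret: bytes, device_id: bytes) -> bytes:
    # Per-device key derived from a secret shared with that device alone
    return hmac.new(vendor_secret, device_id, hashlib.sha256).digest()

def bind_core(core: bytes, key: bytes) -> bytes:
    # XOR with a key-derived stream stands in for a real cipher;
    # applying it twice with the same key recovers the plaintext
    stream = hashlib.sha256(key).digest() * (len(core) // 32 + 1)
    return bytes(a ^ b for a, b in zip(core, stream))

secret = b"vendor-master-secret"        # held by the core vendor
core = b"synthesized netlist bytes"     # the design being distributed
blob = bind_core(core, device_key(secret, b"device-0001"))

# Only the device whose key was used can recover the core:
assert bind_core(blob, device_key(secret, b"device-0001")) == core
assert bind_core(blob, device_key(secret, b"device-0002")) != core
```

Even this toy version shows why the problem is hard at industry scale: it presumes a trustworthy per-device secret and a key-distribution channel between every vendor and every device, neither of which exists today.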

(more…)

Sep 15, '07

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

Did you know that you can remove the PIN pad to enter your PIN?


Picking up the keypad allows the cardholder to align it such that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution for card fraud). On the other hand, it also makes some attacks easier. Consider, for example, the relay attack we demonstrated earlier this year, where the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We’ve been doing some more work on payment terminals and will hopefully have more to say about it soon.

(more…)

Sep 2, '07

A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off eBay and managed to build himself a brute-force pre-image attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash, from a 64-character set, in about 24 hours.

The key here is that this hardware board uses 15 field programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. So, Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs are better than general-purpose CPUs at specific tasks, especially functions that can be divided into many independently running smaller chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine uses 120 low-end FPGAs operating in parallel to break DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and finally, people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.
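The search loop that NSA@home parallelizes across FPGAs can be sketched in software at a toy scale. Each FPGA effectively runs a slice of the loop below, hashing candidates and comparing against the target digest; the charset and length here are shrunk drastically so the sketch runs in milliseconds rather than a day.

```python
# Toy software sketch of the brute-force pre-image search that NSA@home
# runs in hardware. Each FPGA would take a slice of the candidate space;
# here a single loop covers a deliberately tiny space.
import hashlib
from itertools import product

CHARSET = "abcd"   # the real attack uses a 64-character set
LENGTH = 4         # the real attack targets 8-character passwords

def crack(target_digest: bytes):
    """Return the password whose SHA-1 digest matches, or None."""
    for candidate in product(CHARSET, repeat=LENGTH):
        pw = "".join(candidate)
        if hashlib.sha1(pw.encode()).digest() == target_digest:
            return pw
    return None

target = hashlib.sha1(b"bdca").digest()
assert crack(target) == "bdca"
```

The hardware win is that this loop is embarrassingly parallel: every candidate is independent, so carving the 64^8 space across 15 FPGAs, each with many pipelined SHA-1 cores, multiplies throughput by the total core count.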

(more…)

May 21, '07

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. Basically, the tamper resistance protects the banks but not the customers, who are left to trust any of the devices they provide their card and PIN to (the hundreds of different types of terminals do not help here). The problem some customers face is that when fraud happens, they are the ones being blamed for negligence instead of the banks owning up to a faulty system. Exacerbating the problem is that customers cannot prove they have not been negligent with their secrets without the proper data, which the banks have but refuse to hand out.

(more…)

Dec 24, '06

Many discussions over the security of Chip & PIN have focused on the tamper resistance of terminals (for example in the aftermath of the Shell Chip & PIN fraud). It is important to remember, however, that even perfect tamper resistance only ensures that the terminal will no longer be able to communicate with the bank once opened. It does not prevent anyone from replacing most of the terminal’s hardware and presenting it to customers as legitimate, thus freely collecting card details and PINs.

Steven Murdoch and I took the chassis of a real terminal and replaced much of the internal electronics so that we control the screen, keypad and card reader. Steven suggested that in order to show that it is completely under our control, we should make it play Tetris (similarly to the guys who made a voting machine play chess). We recorded a short video showing our Tetris-playing terminal in action. Have a merry Christmas and happy New Year :-)

Update (2007-01-03): The video is now on YouTube.

Update (2007-01-05): The Association for Payment Clearing Services
(APACS) has responded:

APACS, the payments organisation representing high street banks, said the Cambridge breakthrough could be a threat.

‘People could, in theory, use this to steal account details from cards,’ said Sandra Quinn of APACS. ‘Our experts are in discussion with the manufacturers of terminals to see what can be done. Essentially what these people have done is replace the innards of a chip and Pin machine.

‘However, we would say that this has only been seen in a laboratory so far. People would not be able to create counterfeit chip and Pin cards, but they could use this information abroad to make purchases.’

(more…)

Mar 10, '06

I recently got an email from Bank of America offering me a pretty good credit card deal. Usually, I chuck those offers away as spam (both electronic and physical) but this time I decided to bite.

The “apply now” button pointed to http://links.em.bankofamerica.com:8083/…, fair enough. I click. But wait… IE6 says…

Certificate warning IE

Firefox provides more info without layers of abstraction…

Certificate warning FF

I clicked “OK” and got to… https://www.mynewcard.com/! (you’ll notice that going there directly redirects to https://mynewcard.bankofamerica.com/, so only when you click “apply” do you get to see mynewcard.com.)

I consequently emailed BofA with my concerns and got this (surprisingly expedient) reply:

“We recognize that any unsolicited e-mail, legitimate or otherwise, is reason for concern. I can assure you that www.mynewcard.com is a legitimate website of Bank of America.”

Well, not much assurance there since I replied to the original email (cardservices@replies.em.bankofamerica.com), but a whois query confirms that mynewcard.com indeed belongs to BofA. What percentage of the population would go beyond clicking that “OK” on the IE warning as just another annoyance? You know the answer.

So, BofA got three things wrong. Firstly, they put links in the body of the email; this argument has been beaten to the ground: don’t train people to click them. If the bank has great offers, it should make them available when people log into their accounts. Secondly, they messed up the certificate: it’s for mynewcard.bankofamerica.com, not what appears in the address bar, www.mynewcard.com. And finally, they used an unfamiliar domain to process the application. Why? I think the answer lies somewhere in the marketing department, where someone decided that a cooler-sounding mynewcard.com was worth more than sound security measures and good long-term customer training.
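The second mistake is exactly what the browser's hostname check catches: the name in the certificate must match the host in the address bar. A minimal sketch of that check (exact matching only; the function is illustrative and ignores wildcard and subjectAltName rules) might look like:

```python
# Minimal sketch of the browser's certificate hostname check.
# Real matching (RFC 2818) also handles wildcards and subjectAltName
# entries; this illustrative version does exact, case-insensitive matches.
def hostname_matches(cert_cn: str, host: str) -> bool:
    return cert_cn.lower() == host.lower()

# The certificate BofA served named one host...
cert_cn = "mynewcard.bankofamerica.com"
# ...which matches a direct visit, but not the host the email landed on:
assert hostname_matches(cert_cn, "mynewcard.bankofamerica.com")
assert not hostname_matches(cert_cn, "www.mynewcard.com")
```

The failing second check is precisely the condition that triggered the IE6 and Firefox warnings above, and the one most users click straight through.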

Update: Richard mentioned that the rapid response means BofA had heard this concern before. I found this thread [dansanderson.com] discussing mynewcard.com in August 2003! Which adds a fourth thing BofA did wrong: they didn’t fix it!

