Pico part III: Making Pico psychologically acceptable to the everyday user

Many users are willing to sacrifice some security to gain quick and easy access to their services, often in spite of advice from service providers. Users are somehow expected to use a unique password for every service, each sufficiently long and consisting of letters, numbers, and symbols. Since most users do not (indeed, cannot) follow all these rules, they fall back on discouraged coping strategies that make passwords more usable, including writing passwords down, reusing the same password across several services, and choosing easy-to-guess passwords, such as names and hobbies. But usable passwords are not secure passwords, and users are blamed when things go wrong.

This isn’t just unreasonable, it’s unjustified, because even secure passwords are not immune to attack. A number of security breaches have had little to do with user practices and password strength, such as the Snapchat hacking incident, theft of Adobe customer records and passwords, and the Heartbleed bug. Stronger authentication requires a stronger and more usable authentication scheme, not longer and more complex passwords.

We have been evaluating the usability of our more secure, token-based system: Pico, a small, dedicated device that authenticates you to services. Pico is theft-resistant because it only works when it is close to its owner, which it detects by communicating with other devices you own – Picosiblings. These devices are smaller and can be embedded in clothing and accessories. They create a cryptographic “aura” around you that unlocks Pico.
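As a rough illustration of the aura idea, here is a minimal sketch. The sibling names and the simple k-of-n presence rule are illustrative assumptions: the actual design reconstructs Pico's master key from cryptographic secret shares held by the Picosiblings, rather than merely counting devices.

```python
# Hypothetical sketch: Pico unlocks only when enough registered
# Picosiblings are detected nearby. The names and the bare k-of-n
# count are illustrative; the real scheme reassembles a master key
# from cryptographic secret shares carried by the siblings.

REGISTERED_SIBLINGS = {"watch", "glasses", "id_card", "keyring"}
UNLOCK_THRESHOLD = 2  # k-of-n: any 2 of the 4 siblings suffice

def aura_unlocked(siblings_in_range):
    """Return True if enough registered siblings are in radio range."""
    present = REGISTERED_SIBLINGS & set(siblings_in_range)
    return len(present) >= UNLOCK_THRESHOLD

# Pico polls periodically and locks itself when the aura fades:
assert aura_unlocked(["watch", "glasses"])       # owner nearby
assert not aura_unlocked(["watch", "stranger"])  # too few siblings
```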

For people to adopt this new scheme, we need to make sure Pico is psychologically acceptable – unobtrusive and easily and routinely used, especially relative to passwords. The weaknesses of passwords have not been detrimental to their ubiquity because they are easy to administer, well understood, and require no additional hardware or software. Pico, on the other hand, is not currently easy to administer, is not widely understood, and does require additional hardware and software. The onus is on the Pico team to make our solution convenient and easy to use. If Pico is not sufficiently more convenient to use than passwords, it is likely to be rejected, regardless of improvements to security.

This is a particular challenge because we are not merely attempting to replace passwords as they are supposed to be used but as they are actually used by real people, which is more usable, though much less secure, than our current conception of Pico.

Small electronic devices worn by the user – typically watches, wristbands, or glasses – have had limited success (e.g. the Rumba Time Go Watch and the Embrace+ Wristband). Reasons include issues with the accuracy of the data they collect, the time spent managing that data, the lack of control over appearance, the sense that these technologies are gimmicks rather than genuinely useful, the monetary cost, and battery life. All of these issues need to be carefully considered if Pico is to pass the user’s cost-benefit analysis of adoption.

To ensure the psychological acceptability of Pico, we have been conducting user studies from the very early design stages. The point of this research is to make sure we don’t impose any restrictive design decisions on users. We want users to be happy to adopt Pico and this requires a user-centred approach to research and development based on early and frequent usability testing.

Thus far, we have qualitatively investigated user experiences of paper prototypes of Pico, which eventually informed the design of three-dimensional plastic prototypes (Figure 1).

Figure 1. Left: Early paper and plasticine Pico prototypes; Right: Plastic (Polymorph) low-fidelity Pico prototypes

This exploratory research provided insight into whether early Pico designs were sufficient for allowing the user to navigate through common authentication tasks. We then conducted interviews with these plastic prototypes, asking participants which they preferred and why. In the same interviews, we presented participants with a range of pseudo-Picosiblings (Figure 2) to get an idea of the feasibility of Picosiblings from the end-user’s perspective.

Figure 2. The range of pseudo-Picosiblings including everyday items (watch, keys, accessories, etc.) and standalone options (magnetic clips and free-standing coins)

The challenge seems to be in finding a balance between cost, style, and usefulness. Consider, for example, the usefulness of a watch. While we expect a watch to tell us the time (to serve a function), what we really care about is its style and cost. This is the difference between my watch and your watch, and it is where we find its selling power. Most wearable electronic devices, such as smart-watches and fitness gadgets, advertise function and usefulness first, and then style, which is often limited to one, or maybe two, designs. And cost? Well, you’ll just have to find a way to pay for it. Pico, like these devices, could provide the potential usefulness required for widespread and enduring adoption, which, if paired with low cost and user style, should have a greater degree of success than previous wearable electronic devices.

Initial analysis of the results reveals polarised opinions of how Pico and Picosiblings should look, from being fashionable and personalizable to being disguised and discreet. Interestingly, users seemed more concerned about the security of Pico than about the security of passwords. Generally, however, the initial research indicates that users do see the usefulness of Pico as a standalone device, providing it is reliable and can be used for a wide range of services; hardware is no benefit to people unless it replaces most, if not all, passwords, otherwise it becomes another thing that people have to remember.

A legitimate concern for users is loss or theft; we are working to ensure that such incidents do not cause inconvenience or pose a threat to the user, by making the system easily recoverable. Related concerns relevant to possessing physical devices are durability, physical ease-of-use, the awkwardness of having to handle and aim the Pico at a QR code, and the everyday convenience of remembering and carrying several devices.

Interviews revealed that, to make remembering and carrying several devices easier and more worthwhile, Picosiblings should have more than one function (e.g. watches, glasses, ID cards). By making Picosiblings practical, users are more likely to remember to take them, and to perceive the effort of carrying them around as being outweighed by the benefit. Participants typically said that 3-4 items were the maximum number of Picosiblings they would be happy to carry; the aim would be to reduce the required number of Picosiblings to 1 or 2 (depending on what they are), allowing users to carry more on them as “backups” if they were going to use them anyway.

Though suggested by some, the same emphasis on dual-function was not observed for Pico, since this device serves a sufficiently valuable function in itself. However, while many found it perfectly reasonable to carry a dedicated, secure device (given its function), some did express a preference for the convenience of an App on their smartphone. To create a more streamlined experience, we are currently working on such an App, which should give these potential Pico users the flexibility they seem to desire.

By taking into account these and other user opinions before committing to a single design and implementation, we are working to ensure Pico isn’t just theoretically secure – secure only if we can rely on users to implement it properly despite any inconvenience. Instead, we can make sure Pico is actually secure, because we will be creating an authentication scheme that requires users to do only what they would do (or better, want to do) anyway. We can achieve this by taking seriously the capabilities and preferences of the end-­user.


Pico part II: What’s wrong with QR code password replacement schemes, and how to fix them!

Users don’t want to authenticate, they want to do useful or enjoyable things like sending emails, ordering groceries or playing games. To alleviate the burden of having to type passwords, Pico and several other schemes, such as SQRL and tiQR, let the user simply scan a QR code; then a cryptographic protocol authenticates the user behind the scenes and initiates a session. But users, unless they are on the move, may prefer to run their email or web browsing sessions on their full-size computer instead of on their smartphone, whose user interface is relatively limited. Therefore they don’t want an authenticated session between their smartphone and the website but between their computer and the website, even if it’s the smartphone that scans the QR code.

In the original 2011 Pico paper (footnote 37), the website kept track of which “page impression” from a web browser was related to which Pico authentication by including a nonce in each login page QR code and having the Pico sign and return it as part of the authentication. Since then, within the Pico team, there has been much discussion of the so-called Page Impression Nonce or PIN, infamous both for the attacks it enables and its unfortunate, overloaded acronym. While other schemes may have called it something different, or not called it anything at all, it was always present in one form or another because they all used it to solve this same problem of linking browser sessions to authentications.

For example, in the SQRL system each QR code contains a URL, part of which is a random nonce (the PIN in this system). The SQRL app must sign and return this URL, thus associating the nonce with the app’s per-verifier public key. The web browser then starts its session by making another request which includes the URL (and thus the PIN) and gets back a session cookie.
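The linking mechanism common to these schemes might be sketched as follows. This is a simplified model, not SQRL's actual code; the URL shape, names, and data structures are invented for illustration:

```python
# Hedged sketch of nonce-based session linking as used by
# SQRL-style schemes (all names are illustrative). The server
# mints a Page Impression Nonce (PIN) per login page, the phone
# app authenticates against it, and the browser later trades the
# same PIN for a session.
import secrets

pending = {}   # PIN -> authenticated user key (None until app signs in)

def new_login_page():
    """Mint a fresh PIN and embed it in the URL shown as a QR code."""
    pin = secrets.token_urlsafe(16)
    pending[pin] = None
    return f"https://example.com/auth?nonce={pin}"

def app_authenticates(pin, user_public_key):
    # In the real protocol the app signs the URL; here we just
    # record which key authenticated against this nonce.
    pending[pin] = user_public_key

def browser_redeems(pin):
    # Anyone who knows the PIN gets the session: this is the flaw.
    user = pending.pop(pin, None)
    return f"session-for-{user}" if user else None

url = new_login_page()
pin = url.split("nonce=")[1]
app_authenticates(pin, "alice-pubkey")
assert browser_redeems(pin) == "session-for-alice-pubkey"
```

Note that `browser_redeems` checks only knowledge of the PIN, not who is asking: possession of the nonce alone is enough to claim the session.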

So what’s the problem?

The problem with this kind of mechanism is that anyone else who learns the PIN can also make that second request, thus logging themselves in as the user who scanned the QR code. For example, a bad guy can obtain a QR code and its PIN from the login page of bank.com and display it somewhere, like the login page of randomgameforum.com, for a victim to scan. Now, assuming the victim had an account at bank.com, the attacker obtains a bank.com session that the victim unsuspectingly initiated with their smartphone.

Part of the problem is that QR codes are not human-readable. Some have suggested that a simple confirmation step (“Do you really want to login to bank.com?”) might prevent such attacks, but we decided this wasn’t really good enough from a security or a usability perspective. We don’t want users to have to read the confirmation dialog and press the OK button every time they authenticate, and realistically they won’t, especially if they never normally do anything other than press OK.

Moreover, the confirmation step doesn’t help at all when the relaying of the QR code is combined with traditional phishing techniques. Consider receiving this email:

From: security@bank.com
To: victim@example.com
Subject: Urgent: Account security threat
---
Dear Customer

<compelling phishing mumbo jumbo>

To keep your account secure, please scan this QR code:

<login QR code with PIN known by the sender>

Kind regards,

Account security department

and if you oblige:

Do you really want to login to bank.com?

Now the poor user thinks “Well yes, I do, that’s exactly what the account security team asked me to do” and even worse: “I’m definitely not being phished, I remember what those security people kept telling me about checking the address of the website before logging in”.

How to fix it

The solution we came up with is called session delegation. Instead of having a nonce in each QR code, which anyone can later trade-in for an authenticated session, we have the website return a session delegation token to the Pico (not the web browser) as part of the authentication protocol. The Pico may then delegate the session to the browser on the bigger computer by sending it this token, via a secure channel. (For further details see section 4.1 of our “lousy phish” paper.) The price to pay for this strategy is that it requires a channel from the Pico to the browser, which is much harder to provide than the one in the opposite direction (the visual “QR code” channel).
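In outline, the delegation flow might look like the following sketch. The class and method names are invented for illustration; the real protocol is specified in the lousy phish paper:

```python
# Sketch of session delegation (terminology from the post; details
# heavily simplified). The website hands the session token to the
# Pico during the authentication protocol; the browser only gets a
# session if the Pico explicitly delegates the token to it over a
# secure channel.
import secrets

class Website:
    def __init__(self):
        self.sessions = set()

    def authenticate_pico(self, pico_public_key):
        # Runs inside the authenticated Pico<->website protocol, so
        # no one who merely saw the QR code ever learns the token.
        token = secrets.token_urlsafe(16)
        self.sessions.add(token)
        return token

    def browser_uses(self, token):
        """A browser presenting a valid token gets the session."""
        return token in self.sessions

site = Website()
token = site.authenticate_pico("alice-pubkey")

# The Pico sends the token to the user's own browser (via
# Bluetooth, a rendezvous point, or manual transcription):
assert site.browser_uses(token)

# An attacker who only relayed the QR code never sees the token:
assert not site.browser_uses("guessed-token")
```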

We made a prototype which used Bluetooth for the delegation channel but, because Bluetooth was sometimes difficult to set up and not universally available, we even thought about using an audio cable plugged into the microphone jack of the computer. However, we were still worried about the availability and usability of these hardware-based solutions. We did a lot of research into NAT and firewall traversal techniques (such as STUN and TURN) to see if we could use peer-to-peer IP connectivity, but this is not possible in all cases without a separate signalling channel. In our latest prototype we’re using a “rendezvous point”, which is a very simple relay server we’ve designed, running in the public Internet. The rendezvous point is the most universal and usable solution, but does come with some privacy concerns, namely that the untrusted rendezvous server gets to see the Pico/computer IP address pairs which are communicating. So we still allow privacy-conscious users to adopt less convenient alternatives if they’re willing to pay the price of setting up Bluetooth, connecting cables or changing their firewall/NAT settings, but we don’t impose that cost on everyone.
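The rendezvous point itself can be conceptually very simple: a named queue that forwards opaque bytes between two parties who know the same channel identifier. The in-memory model below is illustrative only and glosses over networking entirely:

```python
# Illustrative in-memory model of the rendezvous point: a dumb
# relay forwarding bytes between a Pico and a computer that both
# know the same channel ID. (The real server relays over the
# public Internet; this sketch only shows the trust model.)
from collections import defaultdict, deque

class RendezvousPoint:
    def __init__(self):
        self.channels = defaultdict(deque)

    def send(self, channel_id, payload):
        # The relay sees which endpoints are talking (the privacy
        # cost mentioned above), but the payload itself can be
        # encrypted end-to-end so the relay never reads it.
        self.channels[channel_id].append(payload)

    def receive(self, channel_id):
        queue = self.channels[channel_id]
        return queue.popleft() if queue else None

relay = RendezvousPoint()
relay.send("channel-42", b"encrypted delegation token")
assert relay.receive("channel-42") == b"encrypted delegation token"
assert relay.receive("channel-42") is None  # delivered exactly once
```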

The drawback of this approach is that the user’s computer requires some Pico software to receive the delegation tokens, via the rendezvous point or whatever other channel. Having to install this software hurts the “deployability” of the system as a whole and could render it completely useless in situations where installing new software is not possible. But another innovation, making the delegation token take the form of a URL, means there is always a last-resort fallback channel: manual transcription. If a Pico user can’t install the software on, or doesn’t want to trust, a particular computer, they can always still retype the token URL. There are other security concerns related to having URLs which will log your browser into someone else’s account, but you’ll have to read the lousy phish paper for a more detailed discussion of this topic.

There is clearly much interest in finding a replacement for passwords and several schemes (such as US 8261089 B2, Snap2Pass, tiQR, US 20130219479 A1, QRAuth, SQRL) propose using QR codes. But upon close inspection, all of the above use a page impression nonce, making them vulnerable to session hijacking attacks. We rejected the idea that this could be solved simply by getting the user to carry out more checks and instead we propose an architectural fix which provides a more secure basis for the design of Pico.

For more information about Pico, have a look at our website, sign up to our mailing list and stay tuned for more Pico-related posts on Light Blue Touchpaper in the near future.

Pico part I: Russian hackers stole a billion passwords? True or not, with Pico you wouldn’t worry about it.

In last week’s news (August 2014) we heard that Russian hackers stole 1.2 billion passwords. Even though such claims sound somewhat exaggerated, and not correlated with a proportional amount of fraudulent access to user accounts, password compromise is always a pain for the web sites involved—more so when it causes direct reputation damage by having the company name plastered on the front page of the Financial Times, as happened to eBay on 22 May 2014 after they lost to cybercriminals the passwords of over 100 million users. Shortly before that, in April 2014, it was the Heartbleed bug that forced password resets on allegedly 66% of all websites. And last year, in November 2013, it was Adobe who lost the passwords of 150 million users. Keep going back and you’ll find many more incidents. With alarming frequency we hear of some major security exploit that compromises an enormous number of passwords and embarrasses web sites into asking their users to pick a new password.

Note the irony: despite the complaints from some arrogant security experts that users are too lazy or too dumb to pick strong passwords, when such attacks take place, all users must change their passwords, not just those with a weak one. Even the diligent users who went to the trouble of following complicated instructions and memorizing “avKpt9cpGwdp”, not to mention typing it every day, are punished, for a sin they didn’t commit (the insecurity of the web site) just as much as the allegedly lazy ones who picked “p@ssw0rd” or “1234”. This is fundamentally unfair.

My team has been working on Pico, an ambitious project to replace passwords with a fairer system that does not require remembering secrets. The primary goal of Pico is to be easier to use than remembering a bunch of PINs and passwords; but, incidentally, it’s also meant to be much more secure. On that note, because Pico uses public key cryptography, if a Pico-based web site is compromised, then its users do not need to change their login credentials. The attackers can only steal the users’ public keys, not their private keys, and therefore are not able to impersonate them, neither at that site nor anywhere else (besides the fact that, to protect your privacy, your Pico uses a different key pair for every one of your accounts). This alone, even aside from any usability improvements, should be a good enough reason for web sites to convert to Pico.
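The property described here can be illustrated with a toy model. This sketch uses only the standard library, so the "keypair" is faked by hashing; real Pico uses genuine public-key cryptography, and the site names are invented:

```python
# Toy model of Pico's per-account keys. The "keypair" is simulated
# (public = hash(private)) since this sketch is stdlib-only; the
# point it illustrates is the post's claim that the website's
# database holds only public keys, so a breach reveals nothing
# usable for impersonation, and keys are unlinkable across sites.
import hashlib
import secrets

def new_keypair():
    private = secrets.token_bytes(32)
    public = hashlib.sha256(private).hexdigest()
    return private, public

class Pico:
    def __init__(self):
        self.keys = {}  # a distinct keypair per account, for privacy

    def register(self, site):
        private, public = new_keypair()
        self.keys[site] = private       # never leaves the Pico
        return public                   # the only thing the site stores

pico = Pico()
db = {site: pico.register(site) for site in ["bank.com", "shop.com"]}

# A breach of the sites leaks only public keys...
assert all(len(pk) == 64 for pk in db.values())
# ...and the two sites cannot even link the same user:
assert db["bank.com"] != db["shop.com"]
```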

We didn’t blog it then, but a few months ago we produced a short introductory video of our vision for Pico. On the Pico web site, besides that video and others, there are also frequently asked questions and, for those wanting to probe more deeply, a growing collection of technical papers.


This is the first part in a series on the Pico project: my research associates will follow it up with further developments. Pico was recently featured in The Observer and on Sophos’s Naked Security blog, and is about to feature on BBC Radio 4’s PM programme on Tuesday 19 August at 17:00 (broadcast on Thursday 21 August 2014, with a slight cut; currently on iPlayer, starting at 46:28. Full version broadcast on BBC World Service and downloadable, for a while, from the BBC Global News Podcast, starting at 21:37).

Update: the Pico web site now has a page with press coverage.

Largest ever civil government IT disaster

Last year I taught a systems course to students on the university’s Masters of Public Policy course (this is like an MBA but for civil servants). For their project work, I divided them into teams of three or four and got them to write a case history of a public-sector IT project that went wrong.

The class prize was won by Oliver Campion-Awwad, Alexander Hayton, Leila Smith and Mark Vuaran for The National Programme for IT in the NHS – A Case History. It’s now online, not just to acknowledge their excellent work and to inspire future MPP students, but also as a resource for people interested in what goes wrong with large public-sector IT projects, and how to do better in future.

Regular readers of this blog will recall a series of posts on this topic and related ones; yet despite the huge losses the government doesn’t seem to have learned much at all.

There is more information on our MPP course here, while my teaching materials are available here. With luck, the next generation of civil servants won’t be quite as clueless.

Privacy with technology: where do we go from here?

As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.

The audio recording is available and some notes from the session are below.

The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.

Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.

I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.


First Global Deception Conference

Global Deception conference, Oxford, 17–19th of July 2014

Conference introduction

This deception conference, as part of Hostility and Violence, was organized by interdisciplinary net. Interdisciplinary net runs about 75 conferences a year and was set up by Rob Fisher in 1999 to facilitate international dialogue between disciplines. Conferences are organized on a range of topics, such as gaming, empathy, cyber cultures, violence, and communication and conflict. It is not just the topics of the different conferences that are interdisciplinary; each conference is interdisciplinary within itself as well. During our deception conference we approached deception from very different angles: from optical illusions in art and architecture, via literary hoaxes, fiction and spy novels, to the role of the media in creating false beliefs in society, ending with a more experimental approach to detecting deception. Even a magic trick was part of the (informal) program, and somehow I ended up being the magician’s assistant. You can find my notes and abstracts below.

Finally, if you (also) have an interest in more experimental deception research with a high practical applicability, then we have good news. Aldert Vrij, Ross Anderson and I are hosting a deception conference to bring together deception researchers and law enforcement people from all over the world. This event will take place at Cambridge University on August 22-24, 2015.

Session 1 – Hoaxes

John Laurence Busch: Deceit without, deceit within: The British Government behavior in the secret race to claim steam-powered superiority at sea. Lord Liverpool became prime minister in 1812 and wanted to catch up with the Americans regarding steam-powered boats. The problem, however, was that the Royal Navy did not know how to build such vessels, so in 1820 it joined forces with the British Post, which wanted steam-powered boats to deliver post to Ireland more quickly. The Post was glad the Navy wanted to collaborate, but the Navy was being deceptive: it kept quiet, to the Post, the public and other countries alike, that it did not know how to build such vessels and was hoping to learn how from the Post. This succeeded, importantly while also hiding from the French and the Americans that the British Navy was working on steam vessels to catch up with the US. So the Navy was hiding something questionable (military activity) behind something innocent (post): a deceptive public face.

Catelijne Coopmans & Brian Rappert: Revealing deception and its discontents: Scrutinizing belief and skepticism about the moon landing. The moon landing in the 60s is a possibly deceptive situation in which the stakes were high and the symbolic value enormous. A 2001 Fox documentary, “Conspiracy theory: Did we land on the moon or not?”, based its suspicions mainly on photographic and visual evidence, such as shadows where they shouldn’t be, a “c” shape on a stone, a flag moving in a breeze, and pictures with exactly the same background but different foregrounds. In response, several people have explained these inconsistencies (e.g., the C was a hair). The current authors focus more on the paradoxes that surround, and maybe even fuel, these conspiracy theories, such as disclosure vs. non-disclosure, or secrecy that fuels suspicion, like the US government’s secrecy around Area 51. Can you trust and at the same time not trust the visual proof of the moon landing presented by NASA? Although the quality of the pictures was really bad, the framing was really well done. Apollo 11 tried to debunk this conspiracy theory by showing a picture of the flag currently still standing on the moon. But then, that could be photoshopped…

Discussion: How can you trust a visual image, especially when it is used to prove something, when we live in a world where technology makes it possible to fake anything to a high standard?

European Association of Psychology and Law Conference 2014

The European Association of Psychology and Law (EAPL) annually organises a conference to bring together researchers and practitioners operating in a forensic context. Combining different disciplines, such as psychology, criminology and law, leads to a multidisciplinary conference with presentations on topics like detecting deception, false memories, presenting forensic evidence in court, investigative interviewing, risk assessment, offenders, victims and eyewitness identification (see program). This year’s conference took place on the 24-27th of June in St. Petersburg; I have summarised a selection of the talks given during the conference below.

Tuesday the 24th of June, 2014 Symposium 16.30-18.00 – Allegation: True or false

Van Koppen: I don’t know why I did it: motives for filing false allegations of rape. The first in a series of three talks (see Horselenberg & de Zutter). Explained the basis of the false allegations of rape in a Dutch & Belgian research project. Their conclusion is that the existing data (Viclas) and models are insufficient. Researchers on the current project went through rape cases from 1997-2011 and found more than 50 false allegations. They then investigated why these people had made false allegations and found, in addition to the already known factors (emotional reasons and the alibi factor were often present; mental issues and vigilance, on the other hand, were not), that in a substantial number of cases it was unknown why the person had made a false allegation of sexual abuse. Some people reported not knowing why they made the false allegation (even when pressed by the interviewer to provide a reason), and in other cases the researchers couldn’t find out because the police hadn’t asked or had not written the reasons down, so it wasn’t in the case file. In conclusion, false allegations of rape happen, they cause problems, and it is not always clear why people make them.


The CHERI capability model: Revisiting RISC in an age of risk (ISCA 2014)

Last week, Jonathan Woodruff presented our joint paper on the CHERI memory model, The CHERI capability model: Revisiting RISC in an age of risk, at the 2014 International Symposium on Computer Architecture (ISCA) in Minneapolis (video, slides). This is our first full paper on Capability Hardware Enhanced RISC Instructions (CHERI), collaborative work between Simon Moore’s and my team composed of members of the Security, Computer Architecture, and Systems Research Groups at the University of Cambridge Computer Laboratory, Peter G. Neumann’s group at the Computer Science Laboratory at SRI International, and Ben Laurie at Google.

CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD), compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space, based on capabilities and a Call/Return mechanism supporting mutual distrust.
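As a loose software analogy for readers unfamiliar with capability systems, the toy model below mimics the semantics of a bounds-and-permissions-checked pointer. CHERI enforces these checks in hardware on every memory access; the class here is purely illustrative and invented for this sketch:

```python
# Toy software model of a CHERI-style capability: a pointer that
# carries base, length, and permissions, checked on every access.
# (CHERI does this in hardware; this sketch only mirrors the
# semantics, using a Python list as "memory".)
class Capability:
    def __init__(self, memory, base, length, perms=("load", "store")):
        self.memory, self.base, self.length = memory, base, length
        self.perms = frozenset(perms)

    def load(self, offset):
        self._check(offset, "load")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "store")
        self.memory[self.base + offset] = value

    def _check(self, offset, perm):
        if perm not in self.perms:
            raise PermissionError(perm)
        if not 0 <= offset < self.length:
            raise IndexError("capability bounds violated")

memory = list(range(100))
cap = Capability(memory, base=10, length=4)
assert cap.load(0) == 10   # in bounds: allowed
try:
    cap.load(4)            # one past the end: trapped, not silently read
    assert False
except IndexError:
    pass
```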

The approach draws inspiration from Capsicum, our OS-facing hybrid capability-system model now shipping in FreeBSD and available as a patch for Linux courtesy of Google. We found that capability-system approaches matched extremely well with least-privilege oriented software compartmentalisation, in which programs are broken up into sandboxed components to mitigate the effects of exploited vulnerabilities. CHERI similarly merges research capability-system ideas with a conventional RISC processor design, making accessible the security and robustness benefits of the former while retaining software compatibility with the latter. In the paper, we contrast our approach with a number of others, including Intel’s forthcoming Memory Protection eXtensions (MPX). We pursue in particular a RISC-oriented design instantiated against the 64-bit MIPS ISA, but the ideas should be portable to other RISC ISAs such as ARMv8 and RISC-V.

Our hardware prototype is implemented in Bluespec System Verilog, a high-level hardware description language (HDL) that makes it easier to perform design-space exploration. To facilitate both reproducibility for this work, and also future hardware-software research, we’ve open sourced the underlying Bluespec Extensible RISC Implementation (BERI), our CHERI extensions, and a complete software stack: operating system, compiler, and so on. In fact, support for the underlying 64-bit RISC platform, which implements a version of the 64-bit MIPS ISA, was upstreamed to FreeBSD 10.0, which shipped earlier this year. Our capability-enhanced versions of FreeBSD (CheriBSD) and Clang/LLVM are distributed via GitHub.

You can learn more about CHERI, BERI, and our larger clean-slate hardware-software agenda on the CTSRD Project Website. There, you will find copies of our prior workshop papers, full Bluespec source code for the FPGA processor design, hardware build instructions for our FPGA-based tablet, downloadable CheriBSD images, software source code, and also our recent technical report, Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture, and Jon Woodruff’s PhD dissertation on CHERI.

Jonathan Woodruff, Robert N. M. Watson, David Chisnall, Simon W. Moore, Jonathan Anderson, Brooks Davis, Ben Laurie, Peter G. Neumann, Robert Norton, and Michael Roe. The CHERI capability model: Revisiting RISC in an age of risk, Proceedings of the 41st International Symposium on Computer Architecture (ISCA 2014), Minneapolis, MN, USA, June 14–16, 2014.

EMV: Why Payment Systems Fail

In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will act as a natural experiment addressing the question of whether Chip and PIN or Chip and Signature is better, and from whose perspective?

The US is also distinctive in that the major tussle over payment card security is over the “interchange” fees paid by merchants to the banks which issue the cards used. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing different sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing results of this fight in the courts and through legislation.

EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.