Monthly Archives: July 2014

Privacy with technology: where do we go from here?

As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.

The audio recording is available and some notes from the session are below.

The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.

Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.

I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.


First Global Deception Conference

Global Deception Conference, Oxford, 17–19 July 2014

Conference introduction

This deception conference, part of the Hostility and Violence series, was organised by Inter-Disciplinary.Net, which runs about 75 conferences a year and was set up by Rob Fisher in 1999 to facilitate international dialogue between disciplines. Conferences are organised on a range of topics, such as gaming, empathy, cyber cultures, violence, and communication and conflict. Not only are the topics of the different conferences interdisciplinary; each conference is interdisciplinary in itself. During our deception conference we approached deception from very different angles: from optical illusions in art and architecture, via literary hoaxes, fiction and spy novels, to the role of the media in creating false beliefs in society, ending with a more experimental approach to detecting deception. Even a magic trick was part of the (informal) programme, and somehow I ended up being the magician’s assistant. You can find my notes and abstracts below.

Finally, if you (also) have an interest in more experimental deception research with high practical applicability, we have good news: Aldert Vrij, Ross Anderson and I are hosting a deception conference to bring together deception researchers and law enforcement practitioners from all over the world. This event will take place at the University of Cambridge on 22–24 August 2015.

Session 1 – Hoaxes

John Laurence Busch: Deceit without, deceit within: The British Government’s behaviour in the secret race to claim steam-powered superiority at sea. Lord Liverpool became prime minister in 1812 and wanted to catch up with the Americans on steam-powered ships. The problem, however, was that the Royal Navy did not know how to build such vessels, so in 1820 it joined forces with the British Post Office, which wanted steam-powered boats to deliver post to Ireland more quickly. The Post Office was glad the Navy wanted to collaborate, but the Navy was being deceptive: it concealed from the Post Office, the public and other countries both that it did not know how to build these vessels and that it was hoping to learn how from the project. The plan succeeded and, importantly, it masked from the French and the Americans that the British Navy was working on steam vessels to catch up with the US. The Navy was thus hiding something questionable (military activity) behind something innocent (the post): a deceptive public face.

Catelijne Coopmans & Brian Rappert: Revealing deception and its discontents: Scrutinizing belief and skepticism about the moon landing. The moon landing in the 1960s is a possibly deceptive situation in which the stakes, and the symbolic value, are high. A 2001 Fox documentary, “Conspiracy theory: Did we land on the moon or not?”, based its suspicions mainly on photographic and visual evidence, such as shadows where they shouldn’t be, a “C” shape on a stone, a flag moving in a breeze, and pictures with exactly the same background but different foregrounds. In response, several people have explained these inconsistencies (e.g., the “C” was a hair). The authors focus more on the paradoxes that surround, and maybe even fuel, these conspiracy theories, such as disclosure vs. non-disclosure, and secrecy that fuels suspicion, like the US government’s secrecy around Area 51. Can you trust, and at the same time not trust, the visual proof of the moon landing presented by NASA? Although the quality of the pictures was really bad, the framing was very well done. A picture of the Apollo 11 flag still standing on the moon was offered to debunk the conspiracy theory, but then that could be photoshopped…

Discussion: How can you trust a visual image, especially when it is used to prove something, in a world where technology makes it possible to fake almost anything to a high standard?

European Association of Psychology and Law Conference 2014

The European Association of Psychology and Law (EAPL) annually organises a conference to bring together researchers and practitioners operating in a forensic context. Combining different disciplines, such as psychology, criminology and law, leads to a multidisciplinary conference with presentations on topics like detecting deception, false memories, presenting forensic evidence in court, investigative interviewing, risk assessment, offenders, victims and eyewitness identification (see program). This year’s conference took place on 24–27 June in St. Petersburg, and I have summarised a selection of the talks given there.

Tuesday 24 June 2014, Symposium 16.30–18.00 – Allegation: True or false

Van Koppen: I don’t know why I did it: motives for filing false allegations of rape. The first in a series of three talks (see Horselenberg & de Zutter), explaining the basis of a Dutch and Belgian research project on false allegations of rape. Their conclusion is that the existing data (ViCLAS) and models are insufficient. The researchers went through rape cases from 1997–2011 and found more than 50 false allegations. They then investigated why these people made false allegations and found, in addition to the already known factors (emotional reasons and the alibi factor were especially often present; mental issues and vigilance, on the other hand, were not), that in a substantial number of cases it was unknown why the person made a false allegation of sexual abuse. Some people reported not knowing why they made the false allegation (even when pressured by the interviewer to provide a reason), and in other cases the researchers couldn’t find out because the police hadn’t asked or hadn’t written the reasons down, so they weren’t in the case file. In conclusion: false allegations of rape happen, they cause problems, and it is not always clear why people make them.


The CHERI capability model: Revisiting RISC in an age of risk (ISCA 2014)

Last week, Jonathan Woodruff presented our joint paper on the CHERI memory model, The CHERI capability model: Revisiting RISC in an age of risk, at the 2014 International Symposium on Computer Architecture (ISCA) in Minneapolis (video, slides). This is our first full paper on Capability Hardware Enhanced RISC Instructions (CHERI): collaborative work between Simon Moore’s team and mine (members of the Security, Computer Architecture, and Systems research groups at the University of Cambridge Computer Laboratory), Peter G. Neumann’s group at the Computer Science Laboratory at SRI International, and Ben Laurie at Google.

CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD) compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space based on capabilities and a Call/Return mechanism supporting mutual distrust.

The approach draws inspiration from Capsicum, our OS-facing hybrid capability-system model now shipping in FreeBSD and available as a patch for Linux courtesy of Google. We found that capability-system approaches match extremely well with least-privilege-oriented software compartmentalisation, in which programs are broken up into sandboxed components to mitigate the effects of exploited vulnerabilities. CHERI similarly merges research capability-system ideas with a conventional RISC processor design, making the security and robustness benefits of the former accessible while retaining software compatibility with the latter. In the paper, we contrast our approach with a number of others, including Intel’s forthcoming Memory Protection eXtensions (MPX). We pursue in particular a RISC-oriented design instantiated against the 64-bit MIPS ISA, but the ideas should be portable to other RISC ISAs such as ARMv8 and RISC-V.

Our hardware prototype is implemented in Bluespec System Verilog, a high-level hardware description language (HDL) that makes it easier to perform design-space exploration. To facilitate reproducibility of this work, as well as future hardware-software research, we’ve open-sourced the underlying Bluespec Extensible RISC Implementation (BERI), our CHERI extensions, and a complete software stack: operating system, compiler, and so on. In fact, support for the underlying 64-bit RISC platform, which implements a version of the 64-bit MIPS ISA, was upstreamed to FreeBSD 10.0, which shipped earlier this year. Our capability-enhanced versions of FreeBSD (CheriBSD) and Clang/LLVM are distributed via GitHub.

You can learn more about CHERI, BERI, and our larger clean-slate hardware-software agenda on the CTSRD Project Website. There, you will find copies of our prior workshop papers, full Bluespec source code for the FPGA processor design, hardware build instructions for our FPGA-based tablet, downloadable CheriBSD images, software source code, and also our recent technical report, Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture, and Jon Woodruff’s PhD dissertation on CHERI.

Jonathan Woodruff, Robert N. M. Watson, David Chisnall, Simon W. Moore, Jonathan Anderson, Brooks Davis, Ben Laurie, Peter G. Neumann, Robert Norton, and Michael Roe. The CHERI capability model: Revisiting RISC in an age of risk, Proceedings of the 41st International Symposium on Computer Architecture (ISCA 2014), Minneapolis, MN, USA, June 14–16, 2014.

EMV: Why Payment Systems Fail

In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will act as a natural experiment addressing the question of which is better, Chip and PIN or Chip and Signature, and from whose perspective.

The US is also distinctive in that the major tussle over payment card security is over the “interchange” fees paid by merchants to the banks which issue the cards used. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing different sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing results of this fight in the courts and through legislation.

EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.